00:00:00.000 Started by upstream project "autotest-per-patch" build number 121340 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.065 The recommended git tool is: git 00:00:00.065 using credential 00000000-0000-0000-0000-000000000002 00:00:00.067 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.109 Fetching changes from the remote Git repository 00:00:00.110 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.152 Using shallow fetch with depth 1 00:00:00.152 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.152 > git --version # timeout=10 00:00:00.175 > git --version # 'git version 2.39.2' 00:00:00.175 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.175 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.175 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.091 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.102 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.113 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD) 00:00:05.113 > git config core.sparsecheckout # timeout=10 00:00:05.124 > git read-tree -mu HEAD # timeout=10 00:00:05.141 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5 00:00:05.163 Commit message: "ansible/roles/custom_facts: Drop nvme features" 00:00:05.163 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10 00:00:05.249 [Pipeline] Start of Pipeline 00:00:05.260 [Pipeline] library 00:00:05.261 Loading library shm_lib@master 00:00:05.261 Library shm_lib@master is cached. Copying from home. 00:00:05.275 [Pipeline] node 00:00:05.287 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.288 [Pipeline] { 00:00:05.299 [Pipeline] catchError 00:00:05.300 [Pipeline] { 00:00:05.311 [Pipeline] wrap 00:00:05.318 [Pipeline] { 00:00:05.324 [Pipeline] stage 00:00:05.325 [Pipeline] { (Prologue) 00:00:05.525 [Pipeline] sh 00:00:05.810 + logger -p user.info -t JENKINS-CI 00:00:05.829 [Pipeline] echo 00:00:05.830 Node: CYP9 00:00:05.835 [Pipeline] sh 00:00:06.136 [Pipeline] setCustomBuildProperty 00:00:06.146 [Pipeline] echo 00:00:06.147 Cleanup processes 00:00:06.151 [Pipeline] sh 00:00:06.437 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.437 3983372 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.449 [Pipeline] sh 00:00:06.734 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.734 ++ grep -v 'sudo pgrep' 00:00:06.734 ++ awk '{print $1}' 00:00:06.734 + sudo kill -9 00:00:06.734 + true 00:00:06.748 [Pipeline] cleanWs 00:00:06.789 [WS-CLEANUP] Deleting project workspace... 00:00:06.789 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.797 [WS-CLEANUP] done
00:00:06.801 [Pipeline] setCustomBuildProperty
00:00:06.814 [Pipeline] sh
00:00:07.098 + sudo git config --global --replace-all safe.directory '*'
00:00:07.180 [Pipeline] nodesByLabel
00:00:07.181 Found a total of 1 nodes with the 'sorcerer' label
00:00:07.227 [Pipeline] httpRequest
00:00:07.233 HttpMethod: GET
00:00:07.234 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:07.236 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:07.246 Response Code: HTTP/1.1 200 OK
00:00:07.247 Success: Status code 200 is in the accepted range: 200,404
00:00:07.248 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:13.200 [Pipeline] sh
00:00:13.485 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:13.507 [Pipeline] httpRequest
00:00:13.513 HttpMethod: GET
00:00:13.513 URL: http://10.211.164.96/packages/spdk_6651b13f785a407c9e2f93ac35ba429108401909.tar.gz
00:00:13.514 Sending request to url: http://10.211.164.96/packages/spdk_6651b13f785a407c9e2f93ac35ba429108401909.tar.gz
00:00:13.538 Response Code: HTTP/1.1 200 OK
00:00:13.539 Success: Status code 200 is in the accepted range: 200,404
00:00:13.539 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6651b13f785a407c9e2f93ac35ba429108401909.tar.gz
00:01:11.923 [Pipeline] sh
00:01:12.209 + tar --no-same-owner -xf spdk_6651b13f785a407c9e2f93ac35ba429108401909.tar.gz
00:01:14.772 [Pipeline] sh
00:01:15.061 + git -C spdk log --oneline -n5
00:01:15.061 6651b13f7 test/scheduler: Enable load_balancing test back
00:01:15.061 c3fd276bb test/scheduler: Stop using cgroups
00:01:15.061 8571999d8 test/scheduler: Stop moving all processes between cgroups
00:01:15.061 06472fb6d lib/idxd: fix batch size in kernel IDXD
00:01:15.061 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD
00:01:15.075 [Pipeline] }
00:01:15.094 [Pipeline] // stage
00:01:15.106 [Pipeline] stage
00:01:15.109 [Pipeline] { (Prepare)
00:01:15.131 [Pipeline] writeFile
00:01:15.149 [Pipeline] sh
00:01:15.437 + logger -p user.info -t JENKINS-CI
00:01:15.452 [Pipeline] sh
00:01:15.740 + logger -p user.info -t JENKINS-CI
00:01:15.755 [Pipeline] sh
00:01:16.043 + cat autorun-spdk.conf
00:01:16.043 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.043 SPDK_TEST_NVMF=1
00:01:16.043 SPDK_TEST_NVME_CLI=1
00:01:16.043 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:16.043 SPDK_TEST_NVMF_NICS=e810
00:01:16.043 SPDK_TEST_VFIOUSER=1
00:01:16.043 SPDK_RUN_UBSAN=1
00:01:16.043 NET_TYPE=phy
00:01:16.052 RUN_NIGHTLY=0
00:01:16.057 [Pipeline] readFile
00:01:16.084 [Pipeline] withEnv
00:01:16.086 [Pipeline] {
00:01:16.101 [Pipeline] sh
00:01:16.391 + set -ex
00:01:16.391 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:16.391 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:16.391 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.391 ++ SPDK_TEST_NVMF=1
00:01:16.391 ++ SPDK_TEST_NVME_CLI=1
00:01:16.391 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:16.391 ++ SPDK_TEST_NVMF_NICS=e810
00:01:16.391 ++ SPDK_TEST_VFIOUSER=1
00:01:16.391 ++ SPDK_RUN_UBSAN=1
00:01:16.391 ++ NET_TYPE=phy
00:01:16.391 ++ RUN_NIGHTLY=0
00:01:16.391 + case $SPDK_TEST_NVMF_NICS in
00:01:16.391 + DRIVERS=ice
00:01:16.391 + [[ tcp == \r\d\m\a ]]
00:01:16.391 + [[ -n ice ]]
00:01:16.391 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:16.391 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:16.391 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:16.391 rmmod: ERROR: Module irdma is not currently loaded
00:01:16.391 rmmod: ERROR: Module i40iw is not currently loaded
00:01:16.391 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:16.391 + true
00:01:16.391 + for D in $DRIVERS
00:01:16.391 + sudo modprobe ice
00:01:16.391 + exit 0
00:01:16.402 [Pipeline] }
00:01:16.421 [Pipeline] // withEnv
00:01:16.427 [Pipeline] }
00:01:16.444 [Pipeline] // stage
00:01:16.454 [Pipeline] catchError
00:01:16.456 [Pipeline] {
00:01:16.471 [Pipeline] timeout
00:01:16.471 Timeout set to expire in 40 min
00:01:16.473 [Pipeline] {
00:01:16.488 [Pipeline] stage
00:01:16.490 [Pipeline] { (Tests)
00:01:16.506 [Pipeline] sh
00:01:16.795 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:16.795 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:16.795 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:16.795 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:16.795 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:16.795 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:16.795 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:16.795 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:16.795 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:16.795 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:16.795 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:16.795 + source /etc/os-release
00:01:16.795 ++ NAME='Fedora Linux'
00:01:16.795 ++ VERSION='38 (Cloud Edition)'
00:01:16.795 ++ ID=fedora
00:01:16.795 ++ VERSION_ID=38
00:01:16.795 ++ VERSION_CODENAME=
00:01:16.795 ++ PLATFORM_ID=platform:f38
00:01:16.795 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:16.795 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:16.795 ++ LOGO=fedora-logo-icon
00:01:16.795 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:16.795 ++ HOME_URL=https://fedoraproject.org/
00:01:16.796 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:16.796 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:16.796 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:16.796 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:16.796 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:16.796 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:16.796 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:16.796 ++ SUPPORT_END=2024-05-14
00:01:16.796 ++ VARIANT='Cloud Edition'
00:01:16.796 ++ VARIANT_ID=cloud
00:01:16.796 + uname -a
00:01:16.796 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:16.796 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:19.344 Hugepages
00:01:19.344 node hugesize free / total
00:01:19.344 node0 1048576kB 0 / 0
00:01:19.344 node0 2048kB 0 / 0
00:01:19.344 node1 1048576kB 0 / 0
00:01:19.344 node1 2048kB 0 / 0
00:01:19.344
00:01:19.345 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:19.345 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:19.345 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:19.345 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:19.345 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:19.345 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:19.345 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:19.345 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:19.345 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:19.345 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:19.345 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:19.345 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:19.345 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:19.345 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:19.345 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:19.345 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:19.345 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:19.345 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:19.345 + rm -f /tmp/spdk-ld-path
00:01:19.345 + source autorun-spdk.conf
00:01:19.345 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.345 ++ SPDK_TEST_NVMF=1
00:01:19.345 ++ SPDK_TEST_NVME_CLI=1
00:01:19.345 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:19.345 ++ SPDK_TEST_NVMF_NICS=e810
00:01:19.345 ++ SPDK_TEST_VFIOUSER=1
00:01:19.345 ++ SPDK_RUN_UBSAN=1
00:01:19.345 ++ NET_TYPE=phy
00:01:19.345 ++ RUN_NIGHTLY=0
00:01:19.345 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:19.345 + [[ -n '' ]]
00:01:19.345 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:19.606 + for M in /var/spdk/build-*-manifest.txt
00:01:19.606 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:19.606 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:19.606 + for M in /var/spdk/build-*-manifest.txt
00:01:19.606 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:19.606 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:19.606 ++ uname
00:01:19.606 + [[ Linux == \L\i\n\u\x ]]
00:01:19.606 + sudo dmesg -T
00:01:19.606 + sudo dmesg --clear
00:01:19.606 + dmesg_pid=3984337
00:01:19.606 + [[ Fedora Linux == FreeBSD ]]
00:01:19.606 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:19.606 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:19.606 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:19.606 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:19.606 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:19.606 + [[ -x /usr/src/fio-static/fio ]]
00:01:19.606 + sudo dmesg -Tw
00:01:19.606 + export FIO_BIN=/usr/src/fio-static/fio
00:01:19.606 + FIO_BIN=/usr/src/fio-static/fio
00:01:19.606 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:19.606 + [[ !
-v VFIO_QEMU_BIN ]] 00:01:19.606 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:19.606 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.606 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.606 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:19.606 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.606 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.606 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:19.606 Test configuration: 00:01:19.606 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.606 SPDK_TEST_NVMF=1 00:01:19.606 SPDK_TEST_NVME_CLI=1 00:01:19.606 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.606 SPDK_TEST_NVMF_NICS=e810 00:01:19.606 SPDK_TEST_VFIOUSER=1 00:01:19.606 SPDK_RUN_UBSAN=1 00:01:19.606 NET_TYPE=phy 00:01:19.606 RUN_NIGHTLY=0 02:20:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:19.606 02:20:53 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:19.606 02:20:53 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:19.606 02:20:53 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:19.606 02:20:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.606 02:20:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.606 02:20:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.606 02:20:53 -- paths/export.sh@5 -- $ export PATH 00:01:19.606 02:20:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.606 02:20:53 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:19.606 02:20:53 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:19.606 02:20:53 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714177253.XXXXXX 00:01:19.607 02:20:53 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714177253.LAAODx 00:01:19.607 02:20:53 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:19.607 02:20:53 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:19.607 02:20:53 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:19.607 02:20:53 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:19.607 02:20:53 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:19.607 02:20:53 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:19.607 02:20:53 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:19.607 02:20:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.607 02:20:53 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:19.607 02:20:53 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:01:19.607 02:20:53 -- pm/common@17 -- $ local monitor 00:01:19.607 02:20:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.607 02:20:53 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3984373 00:01:19.607 02:20:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.607 02:20:53 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3984375 00:01:19.607 02:20:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.607 02:20:53 -- pm/common@21 -- $ date +%s 00:01:19.607 02:20:53 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3984377 00:01:19.607 02:20:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.607 02:20:53 -- pm/common@21 -- $ date +%s 00:01:19.607 02:20:53 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3984380 00:01:19.607 02:20:53 -- pm/common@26 -- $ sleep 1 00:01:19.607 02:20:53 -- pm/common@21 -- $ date +%s 00:01:19.867 02:20:53 -- pm/common@21 -- $ date +%s 00:01:19.867 02:20:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714177253 00:01:19.867 02:20:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714177253 00:01:19.867 02:20:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714177253 00:01:19.867 02:20:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714177253 00:01:19.867 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714177253_collect-vmstat.pm.log 00:01:19.867 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714177253_collect-cpu-load.pm.log 00:01:19.867 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714177253_collect-bmc-pm.bmc.pm.log 00:01:19.867 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714177253_collect-cpu-temp.pm.log 00:01:20.809 02:20:54 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:01:20.809 02:20:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:20.809 02:20:54 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:20.809 02:20:54 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.809 02:20:54 -- spdk/autobuild.sh@16 -- $ date -u 00:01:20.809 Sat Apr 27 12:20:54 AM UTC 2024 00:01:20.809 02:20:54 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:20.809 v24.05-pre-451-g6651b13f7 00:01:20.809 02:20:54 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:20.809 02:20:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:20.809 02:20:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:20.809 02:20:54 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:20.809 02:20:54 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:20.809 02:20:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.809 ************************************ 00:01:20.809 START TEST ubsan 00:01:20.809 ************************************ 00:01:20.809 02:20:54 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:01:20.809 using ubsan 00:01:20.809 00:01:20.809 real 0m0.001s 00:01:20.809 user 0m0.000s 00:01:20.809 sys 0m0.000s 00:01:20.809 02:20:54 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:20.809 02:20:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.809 ************************************ 00:01:20.809 END TEST ubsan 00:01:20.809 ************************************ 00:01:21.070 02:20:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:21.070 02:20:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:21.070 02:20:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:21.070 02:20:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:21.070 02:20:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:21.070 02:20:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:21.070 02:20:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:21.070 02:20:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:21.070 02:20:54 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:21.070 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:21.070 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:21.641 Using 'verbs' RDMA provider 00:01:37.135 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:49.377 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:49.377 Creating mk/config.mk...done. 00:01:49.377 Creating mk/cc.flags.mk...done. 00:01:49.377 Type 'make' to build. 
00:01:49.377 02:21:22 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:49.377 02:21:22 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:49.377 02:21:22 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:49.377 02:21:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.377 ************************************ 00:01:49.377 START TEST make 00:01:49.377 ************************************ 00:01:49.378 02:21:22 -- common/autotest_common.sh@1111 -- $ make -j144 00:01:49.950 make[1]: Nothing to be done for 'all'. 00:01:51.333 The Meson build system 00:01:51.333 Version: 1.3.1 00:01:51.333 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:51.333 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:51.333 Build type: native build 00:01:51.333 Project name: libvfio-user 00:01:51.333 Project version: 0.0.1 00:01:51.333 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:51.333 C linker for the host machine: cc ld.bfd 2.39-16 00:01:51.333 Host machine cpu family: x86_64 00:01:51.333 Host machine cpu: x86_64 00:01:51.333 Run-time dependency threads found: YES 00:01:51.333 Library dl found: YES 00:01:51.333 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:51.333 Run-time dependency json-c found: YES 0.17 00:01:51.333 Run-time dependency cmocka found: YES 1.1.7 00:01:51.333 Program pytest-3 found: NO 00:01:51.333 Program flake8 found: NO 00:01:51.333 Program misspell-fixer found: NO 00:01:51.333 Program restructuredtext-lint found: NO 00:01:51.333 Program valgrind found: YES (/usr/bin/valgrind) 00:01:51.333 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:51.333 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.333 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.333 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:51.333 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:51.333 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:51.333 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:51.333 Build targets in project: 8 00:01:51.333 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:51.333 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:51.333 00:01:51.333 libvfio-user 0.0.1 00:01:51.333 00:01:51.333 User defined options 00:01:51.333 buildtype : debug 00:01:51.333 default_library: shared 00:01:51.333 libdir : /usr/local/lib 00:01:51.333 00:01:51.333 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.333 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:51.592 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:51.592 [2/37] Compiling C object samples/null.p/null.c.o 00:01:51.592 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:51.592 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:51.592 [5/37] Compiling C object samples/server.p/server.c.o 00:01:51.592 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:51.592 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:51.592 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:51.592 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:51.592 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:51.592 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:51.592 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:51.592 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:51.592 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:51.592 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:51.592 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:51.592 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:51.592 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:51.592 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:51.592 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:51.592 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:51.592 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:51.592 [23/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:51.592 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:51.592 [25/37] Compiling C object samples/client.p/client.c.o 00:01:51.592 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:51.592 [27/37] Linking target samples/client 00:01:51.592 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:51.592 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:51.592 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:51.592 [31/37] Linking target test/unit_tests 00:01:51.853 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:51.853 [33/37] Linking target samples/null 00:01:51.853 [34/37] Linking target samples/server 00:01:51.853 [35/37] Linking target samples/gpio-pci-idio-16 00:01:51.853 [36/37] Linking target samples/lspci 00:01:51.853 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:51.853 INFO: autodetecting backend as ninja 00:01:51.853 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:51.853 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:52.115 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:52.115 ninja: no work to do. 00:01:58.774 The Meson build system 00:01:58.774 Version: 1.3.1 00:01:58.774 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:58.774 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:58.774 Build type: native build 00:01:58.774 Program cat found: YES (/usr/bin/cat) 00:01:58.774 Project name: DPDK 00:01:58.774 Project version: 23.11.0 00:01:58.774 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:58.774 C linker for the host machine: cc ld.bfd 2.39-16 00:01:58.774 Host machine cpu family: x86_64 00:01:58.774 Host machine cpu: x86_64 00:01:58.774 Message: ## Building in Developer Mode ## 00:01:58.774 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:58.774 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:58.774 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:58.774 Program python3 found: YES (/usr/bin/python3) 00:01:58.774 Program cat found: YES (/usr/bin/cat) 00:01:58.774 Compiler for C supports arguments -march=native: YES 00:01:58.774 Checking for size of "void *" : 8 00:01:58.774 Checking for size of "void *" : 8 (cached) 00:01:58.774 Library m found: YES 00:01:58.774 Library numa found: YES 00:01:58.774 Has header "numaif.h" : YES 00:01:58.774 Library fdt found: NO 00:01:58.774 Library execinfo found: NO 00:01:58.774 Has header "execinfo.h" : YES 00:01:58.774 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:58.774 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:58.774 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:58.774 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:58.774 Run-time dependency openssl found: YES 3.0.9 00:01:58.774 Run-time dependency libpcap found: YES 1.10.4 00:01:58.774 Has header "pcap.h" with dependency libpcap: YES 00:01:58.774 Compiler for C supports arguments -Wcast-qual: YES 00:01:58.774 Compiler for C supports arguments -Wdeprecated: YES 00:01:58.774 Compiler for C supports arguments -Wformat: YES 00:01:58.774 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:58.774 Compiler for C supports arguments -Wformat-security: NO 00:01:58.774 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:58.774 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:58.774 Compiler for C supports arguments -Wnested-externs: YES 00:01:58.774 Compiler for C supports arguments -Wold-style-definition: YES 00:01:58.774 Compiler for C supports arguments -Wpointer-arith: YES 00:01:58.774 Compiler for C supports arguments -Wsign-compare: YES 00:01:58.774 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:58.774 Compiler for C supports arguments -Wundef: YES 00:01:58.774 Compiler for C supports arguments -Wwrite-strings: YES 00:01:58.774 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:58.774 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:58.774 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:58.774 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:58.774 Program objdump found: YES (/usr/bin/objdump) 00:01:58.774 Compiler for C supports arguments -mavx512f: YES 00:01:58.774 Checking if "AVX512 checking" compiles: YES 00:01:58.774 Fetching value of define "__SSE4_2__" : 1 00:01:58.774 Fetching value of define "__AES__" : 1 00:01:58.774 Fetching value of define "__AVX__" : 1 00:01:58.774 Fetching value of define "__AVX2__" : 1 00:01:58.774 Fetching value of define "__AVX512BW__" : 1 00:01:58.774 Fetching value of define "__AVX512CD__" : 1 00:01:58.774 Fetching value of define "__AVX512DQ__" : 1 00:01:58.774 Fetching value of define "__AVX512F__" : 1 00:01:58.774 Fetching value of define "__AVX512VL__" : 1 00:01:58.774 Fetching value of define "__PCLMUL__" : 1 00:01:58.774 Fetching value of define "__RDRND__" : 1 00:01:58.774 Fetching value of define "__RDSEED__" : 1 00:01:58.774 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:58.774 Fetching value of define "__znver1__" : (undefined) 00:01:58.774 Fetching value of define "__znver2__" : (undefined) 00:01:58.774 Fetching value of define "__znver3__" : (undefined) 00:01:58.774 Fetching value of define "__znver4__" : (undefined) 00:01:58.774 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:58.774 Message: lib/log: Defining dependency "log" 00:01:58.774 Message: lib/kvargs: Defining dependency "kvargs" 00:01:58.774 Message: lib/telemetry: Defining dependency "telemetry" 00:01:58.774 Checking for function "getentropy" : NO 00:01:58.774 Message: lib/eal: Defining dependency "eal" 00:01:58.774 Message: lib/ring: Defining dependency "ring" 00:01:58.774 Message: lib/rcu: Defining dependency "rcu" 00:01:58.774 Message: lib/mempool: Defining dependency "mempool" 00:01:58.774 Message: lib/mbuf: Defining dependency "mbuf" 00:01:58.774 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:58.774 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:58.774 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:58.774 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:58.774 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:58.774 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:58.774 Compiler for C supports arguments -mpclmul: YES 00:01:58.774 Compiler for C supports arguments -maes: YES 00:01:58.774 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:58.774 Compiler for C supports arguments -mavx512bw: YES 00:01:58.774 Compiler for C supports arguments -mavx512dq: YES 00:01:58.774 Compiler for C supports arguments -mavx512vl: YES 00:01:58.774 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:58.774 Compiler for C supports arguments -mavx2: YES 00:01:58.774 Compiler for C supports arguments -mavx: YES 00:01:58.774 Message: lib/net: Defining dependency "net" 00:01:58.774 Message: lib/meter: Defining dependency "meter" 00:01:58.774 Message: lib/ethdev: Defining dependency "ethdev" 00:01:58.774 Message: lib/pci: Defining dependency "pci" 00:01:58.774 Message: lib/cmdline: Defining dependency "cmdline" 00:01:58.774 Message: lib/hash: Defining dependency "hash" 00:01:58.774 Message: lib/timer: Defining dependency "timer" 00:01:58.774 Message: lib/compressdev: Defining dependency "compressdev" 00:01:58.774 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:58.774 Message: lib/dmadev: Defining dependency "dmadev" 00:01:58.774 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:58.774 
Message: lib/power: Defining dependency "power"
00:01:58.774 Message: lib/reorder: Defining dependency "reorder"
00:01:58.774 Message: lib/security: Defining dependency "security"
00:01:58.774 Has header "linux/userfaultfd.h" : YES
00:01:58.774 Has header "linux/vduse.h" : YES
00:01:58.774 Message: lib/vhost: Defining dependency "vhost"
00:01:58.774 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:58.774 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:58.774 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:58.774 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:58.774 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:58.774 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:58.774 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:58.774 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:58.774 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:58.774 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:58.774 Program doxygen found: YES (/usr/bin/doxygen)
00:01:58.774 Configuring doxy-api-html.conf using configuration
00:01:58.774 Configuring doxy-api-man.conf using configuration
00:01:58.774 Program mandb found: YES (/usr/bin/mandb)
00:01:58.774 Program sphinx-build found: NO
00:01:58.774 Configuring rte_build_config.h using configuration
00:01:58.774 Message:
00:01:58.774 =================
00:01:58.774 Applications Enabled
00:01:58.774 =================
00:01:58.774
00:01:58.774 apps:
00:01:58.774
00:01:58.774
00:01:58.774 Message:
00:01:58.774 =================
00:01:58.774 Libraries Enabled
00:01:58.774 =================
00:01:58.774
00:01:58.775 libs:
00:01:58.775 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:58.775 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:58.775 cryptodev, dmadev, power, reorder, security, vhost,
00:01:58.775
00:01:58.775 Message:
00:01:58.775 ===============
00:01:58.775 Drivers Enabled
00:01:58.775 ===============
00:01:58.775
00:01:58.775 common:
00:01:58.775
00:01:58.775 bus:
00:01:58.775 pci, vdev,
00:01:58.775 mempool:
00:01:58.775 ring,
00:01:58.775 dma:
00:01:58.775
00:01:58.775 net:
00:01:58.775
00:01:58.775 crypto:
00:01:58.775
00:01:58.775 compress:
00:01:58.775
00:01:58.775 vdpa:
00:01:58.775
00:01:58.775
00:01:58.775 Message:
00:01:58.775 =================
00:01:58.775 Content Skipped
00:01:58.775 =================
00:01:58.775
00:01:58.775 apps:
00:01:58.775 dumpcap: explicitly disabled via build config
00:01:58.775 graph: explicitly disabled via build config
00:01:58.775 pdump: explicitly disabled via build config
00:01:58.775 proc-info: explicitly disabled via build config
00:01:58.775 test-acl: explicitly disabled via build config
00:01:58.775 test-bbdev: explicitly disabled via build config
00:01:58.775 test-cmdline: explicitly disabled via build config
00:01:58.775 test-compress-perf: explicitly disabled via build config
00:01:58.775 test-crypto-perf: explicitly disabled via build config
00:01:58.775 test-dma-perf: explicitly disabled via build config
00:01:58.775 test-eventdev: explicitly disabled via build config
00:01:58.775 test-fib: explicitly disabled via build config
00:01:58.775 test-flow-perf: explicitly disabled via build config
00:01:58.775 test-gpudev: explicitly disabled via build config
00:01:58.775 test-mldev: explicitly disabled via build config
00:01:58.775 test-pipeline: explicitly disabled via build config 00:01:58.775 test-pmd: explicitly disabled via build config 00:01:58.775 test-regex: explicitly disabled via build config 00:01:58.775 test-sad: explicitly disabled via build config 00:01:58.775 test-security-perf: explicitly disabled via build config 00:01:58.775 00:01:58.775 libs: 00:01:58.775 metrics: explicitly disabled via build config 00:01:58.775 acl: explicitly disabled via build config 00:01:58.775 bbdev: explicitly disabled via build config 00:01:58.775 bitratestats: explicitly disabled via build config 00:01:58.775 bpf: explicitly disabled via build config 00:01:58.775 cfgfile: explicitly disabled via build config 00:01:58.775 distributor: explicitly disabled via build config 00:01:58.775 efd: explicitly disabled via build config 00:01:58.775 eventdev: explicitly disabled via build config 00:01:58.775 dispatcher: explicitly disabled via build config 00:01:58.775 gpudev: explicitly disabled via build config 00:01:58.775 gro: explicitly disabled via build config 00:01:58.775 gso: explicitly disabled via build config 00:01:58.775 ip_frag: explicitly disabled via build config 00:01:58.775 jobstats: explicitly disabled via build config 00:01:58.775 latencystats: explicitly disabled via build config 00:01:58.775 lpm: explicitly disabled via build config 00:01:58.775 member: explicitly disabled via build config 00:01:58.775 pcapng: explicitly disabled via build config 00:01:58.775 rawdev: explicitly disabled via build config 00:01:58.775 regexdev: explicitly disabled via build config 00:01:58.775 mldev: explicitly disabled via build config 00:01:58.775 rib: explicitly disabled via build config 00:01:58.775 sched: explicitly disabled via build config 00:01:58.775 stack: explicitly disabled via build config 00:01:58.775 ipsec: explicitly disabled via build config 00:01:58.775 pdcp: explicitly disabled via build config 00:01:58.775 fib: explicitly disabled via build config 00:01:58.775 port: explicitly disabled via build config 00:01:58.775 pdump: explicitly disabled via build config 00:01:58.775 table: explicitly disabled via build config 00:01:58.775 pipeline: explicitly disabled via build config 00:01:58.775 graph: explicitly disabled via build config 00:01:58.775 node: explicitly disabled via build config 00:01:58.775 00:01:58.775 drivers: 00:01:58.775 common/cpt: not in enabled drivers build config 00:01:58.775 common/dpaax: not in enabled drivers build config 00:01:58.775 common/iavf: not in enabled drivers build config 00:01:58.775 common/idpf: not in enabled drivers build config 00:01:58.775 common/mvep: not in enabled drivers build config 00:01:58.775 common/octeontx: not in enabled drivers build config 00:01:58.775 bus/auxiliary: not in enabled drivers build config 00:01:58.775 bus/cdx: not in enabled drivers build config 00:01:58.775 bus/dpaa: not in enabled drivers build config 00:01:58.775 bus/fslmc: not in enabled drivers build config 00:01:58.775 bus/ifpga: not in enabled drivers build config 00:01:58.775 bus/platform: not in enabled drivers build config 00:01:58.775 bus/vmbus: not in enabled drivers build config 00:01:58.775 common/cnxk: not in enabled drivers build config 00:01:58.775 common/mlx5: not in enabled drivers build config 00:01:58.775 common/nfp: not in enabled drivers build config 00:01:58.775 common/qat: not in enabled drivers build config 00:01:58.775 common/sfc_efx: not in enabled drivers build config 00:01:58.775 mempool/bucket: not in enabled drivers build config 00:01:58.775 mempool/cnxk: 
not in enabled drivers build config 00:01:58.775 mempool/dpaa: not in enabled drivers build config 00:01:58.775 mempool/dpaa2: not in enabled drivers build config 00:01:58.775 mempool/octeontx: not in enabled drivers build config 00:01:58.775 mempool/stack: not in enabled drivers build config 00:01:58.775 dma/cnxk: not in enabled drivers build config 00:01:58.775 dma/dpaa: not in enabled drivers build config 00:01:58.775 dma/dpaa2: not in enabled drivers build config 00:01:58.775 dma/hisilicon: not in enabled drivers build config 00:01:58.775 dma/idxd: not in enabled drivers build config 00:01:58.775 dma/ioat: not in enabled drivers build config 00:01:58.775 dma/skeleton: not in enabled drivers build config 00:01:58.775 net/af_packet: not in enabled drivers build config 00:01:58.775 net/af_xdp: not in enabled drivers build config 00:01:58.775 net/ark: not in enabled drivers build config 00:01:58.775 net/atlantic: not in enabled drivers build config 00:01:58.775 net/avp: not in enabled drivers build config 00:01:58.775 net/axgbe: not in enabled drivers build config 00:01:58.775 net/bnx2x: not in enabled drivers build config 00:01:58.775 net/bnxt: not in enabled drivers build config 00:01:58.775 net/bonding: not in enabled drivers build config 00:01:58.775 net/cnxk: not in enabled drivers build config 00:01:58.775 net/cpfl: not in enabled drivers build config 00:01:58.775 net/cxgbe: not in enabled drivers build config 00:01:58.775 net/dpaa: not in enabled drivers build config 00:01:58.775 net/dpaa2: not in enabled drivers build config 00:01:58.775 net/e1000: not in enabled drivers build config 00:01:58.775 net/ena: not in enabled drivers build config 00:01:58.775 net/enetc: not in enabled drivers build config 00:01:58.775 net/enetfec: not in enabled drivers build config 00:01:58.775 net/enic: not in enabled drivers build config 00:01:58.775 net/failsafe: not in enabled drivers build config 00:01:58.775 net/fm10k: not in enabled drivers build config 00:01:58.775 net/gve: not in enabled drivers build config 00:01:58.775 net/hinic: not in enabled drivers build config 00:01:58.775 net/hns3: not in enabled drivers build config 00:01:58.775 net/i40e: not in enabled drivers build config 00:01:58.775 net/iavf: not in enabled drivers build config 00:01:58.775 net/ice: not in enabled drivers build config 00:01:58.775 net/idpf: not in enabled drivers build config 00:01:58.775 net/igc: not in enabled drivers build config 00:01:58.775 net/ionic: not in enabled drivers build config 00:01:58.775 net/ipn3ke: not in enabled drivers build config 00:01:58.775 net/ixgbe: not in enabled drivers build config 00:01:58.775 net/mana: not in enabled drivers build config 00:01:58.775 net/memif: not in enabled drivers build config 00:01:58.775 net/mlx4: not in enabled drivers build config 00:01:58.775 net/mlx5: not in enabled drivers build config 00:01:58.775 net/mvneta: not in enabled drivers build config 00:01:58.775 net/mvpp2: not in enabled drivers build config 00:01:58.775 net/netvsc: not in enabled drivers build config 00:01:58.775 net/nfb: not in enabled drivers build config 00:01:58.775 net/nfp: not in enabled drivers build config 00:01:58.775 net/ngbe: not in enabled drivers build config 00:01:58.775 net/null: not in enabled drivers build config 00:01:58.775 net/octeontx: not in enabled drivers build config 00:01:58.775 net/octeon_ep: not in enabled drivers build config 00:01:58.775 net/pcap: not in enabled drivers build config 00:01:58.775 net/pfe: not in enabled drivers build config 00:01:58.775 net/qede: 
not in enabled drivers build config 00:01:58.775 net/ring: not in enabled drivers build config 00:01:58.775 net/sfc: not in enabled drivers build config 00:01:58.775 net/softnic: not in enabled drivers build config 00:01:58.775 net/tap: not in enabled drivers build config 00:01:58.775 net/thunderx: not in enabled drivers build config 00:01:58.775 net/txgbe: not in enabled drivers build config 00:01:58.775 net/vdev_netvsc: not in enabled drivers build config 00:01:58.775 net/vhost: not in enabled drivers build config 00:01:58.775 net/virtio: not in enabled drivers build config 00:01:58.775 net/vmxnet3: not in enabled drivers build config 00:01:58.775 raw/*: missing internal dependency, "rawdev" 00:01:58.775 crypto/armv8: not in enabled drivers build config 00:01:58.775 crypto/bcmfs: not in enabled drivers build config 00:01:58.775 crypto/caam_jr: not in enabled drivers build config 00:01:58.775 crypto/ccp: not in enabled drivers build config 00:01:58.775 crypto/cnxk: not in enabled drivers build config 00:01:58.775 crypto/dpaa_sec: not in enabled drivers build config 00:01:58.775 crypto/dpaa2_sec: not in enabled drivers build config 00:01:58.775 crypto/ipsec_mb: not in enabled drivers build config 00:01:58.775 crypto/mlx5: not in enabled drivers build config 00:01:58.775 crypto/mvsam: not in enabled drivers build config 00:01:58.775 crypto/nitrox: not in enabled drivers build config 00:01:58.775 crypto/null: not in enabled drivers build config 00:01:58.775 crypto/octeontx: not in enabled drivers build config 00:01:58.775 crypto/openssl: not in enabled drivers build config 00:01:58.775 crypto/scheduler: not in enabled drivers build config 00:01:58.775 crypto/uadk: not in enabled drivers build config 00:01:58.776 crypto/virtio: not in enabled drivers build config 00:01:58.776 compress/isal: not in enabled drivers build config 00:01:58.776 compress/mlx5: not in enabled drivers build config 00:01:58.776 compress/octeontx: not in enabled drivers build config 00:01:58.776 compress/zlib: not in enabled drivers build config 00:01:58.776 regex/*: missing internal dependency, "regexdev" 00:01:58.776 ml/*: missing internal dependency, "mldev" 00:01:58.776 vdpa/ifc: not in enabled drivers build config 00:01:58.776 vdpa/mlx5: not in enabled drivers build config 00:01:58.776 vdpa/nfp: not in enabled drivers build config 00:01:58.776 vdpa/sfc: not in enabled drivers build config 00:01:58.776 event/*: missing internal dependency, "eventdev" 00:01:58.776 baseband/*: missing internal dependency, "bbdev" 00:01:58.776 gpu/*: missing internal dependency, "gpudev" 00:01:58.776 00:01:58.776 00:01:58.776 Build targets in project: 84 00:01:58.776 00:01:58.776 DPDK 23.11.0 00:01:58.776 00:01:58.776 User defined options 00:01:58.776 buildtype : debug 00:01:58.776 default_library : shared 00:01:58.776 libdir : lib 00:01:58.776 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:58.776 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:58.776 c_link_args : 00:01:58.776 cpu_instruction_set: native 00:01:58.776 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:58.776 disable_libs : 
pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:58.776 enable_docs : false 00:01:58.776 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:58.776 enable_kmods : false 00:01:58.776 tests : false 00:01:58.776 00:01:58.776 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:58.776 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:58.776 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:58.776 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:58.776 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:58.776 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:58.776 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:58.776 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:58.776 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:58.776 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:58.776 [9/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:58.776 [10/264] Linking static target lib/librte_kvargs.a 00:01:58.776 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:58.776 [12/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:58.776 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:58.776 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:58.776 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:58.776 [16/264] Linking static target lib/librte_log.a 00:01:58.776 [17/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:58.776 [18/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:58.776 [19/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:58.776 [20/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:58.776 [21/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:58.776 [22/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:58.776 [23/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.776 [24/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:58.776 [25/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:58.776 [26/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.776 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:58.776 [28/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:58.776 [29/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:58.776 [30/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:58.776 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:59.035 [32/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:59.035 [33/264] Linking static target lib/librte_pci.a 00:01:59.035 [34/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:59.035 [35/264] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:01:59.035 [36/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:59.035 [37/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:59.035 [38/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:59.035 [39/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:59.035 [40/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:59.035 [41/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:59.035 [42/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:59.035 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:59.036 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:59.036 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:59.036 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:59.036 [47/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:59.036 [48/264] Linking static target lib/librte_ring.a 00:01:59.036 [49/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:59.036 [50/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:59.036 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:59.036 [52/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:59.036 [53/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.036 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:59.036 [55/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.036 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:59.297 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:59.297 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:59.297 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:59.297 [60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:59.297 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:59.297 [62/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:59.297 [63/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:59.297 [64/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:59.297 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:59.297 [66/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:59.297 [67/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:59.297 [68/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:59.297 [69/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:59.297 [70/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:59.297 [71/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:59.297 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:59.297 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:59.297 [74/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:59.297 [75/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:59.297 [76/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:59.297 [77/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:59.297 [78/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:59.297 [79/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:59.297 [80/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:59.297 [81/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:59.297 [82/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:59.297 [83/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:59.297 [84/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:59.297 [85/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:59.297 [86/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:59.297 [87/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:59.297 [88/264] Linking static target lib/librte_telemetry.a 00:01:59.297 [89/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:59.297 [90/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:59.297 [91/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:59.297 [92/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:59.297 [93/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:59.297 [94/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:59.297 [95/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:59.297 [96/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:59.297 [97/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:59.297 [98/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:59.297 [99/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:59.297 [100/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:59.297 [101/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:59.297 [102/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:59.297 [103/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:59.297 [104/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:59.297 [105/264] Linking static target lib/librte_meter.a 00:01:59.297 [106/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:59.297 [107/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:59.297 [108/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:59.297 [109/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:59.297 [110/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:59.297 [111/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:59.297 [112/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.297 [113/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:59.297 [114/264] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:59.297 [115/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:59.297 [116/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.297 [117/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:59.297 [118/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:59.297 [119/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:59.297 [120/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:59.297 [121/264] Linking static target lib/librte_timer.a 00:01:59.297 [122/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.297 [123/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:59.297 [124/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:59.297 [125/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.297 [126/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:59.297 [127/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:59.297 [128/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:59.297 [129/264] Linking static target lib/librte_cmdline.a 00:01:59.297 [130/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:59.297 [131/264] Linking static target lib/librte_security.a 00:01:59.297 [132/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:59.297 [133/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:59.297 [134/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:59.558 [135/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:59.558 [136/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:59.558 [137/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:59.558 [138/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:59.558 [139/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:59.558 [140/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:59.558 [141/264] Linking target lib/librte_log.so.24.0 00:01:59.558 [142/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:59.558 [143/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:59.558 [144/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:59.558 [145/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:59.558 [146/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:59.558 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:59.558 [148/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:59.558 [149/264] Linking static target lib/librte_net.a 00:01:59.558 [150/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:59.558 [151/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:59.558 [152/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:59.558 [153/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:59.558 [154/264] Linking static target lib/librte_rcu.a 00:01:59.558 [155/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:59.558 [156/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:59.558 [157/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:59.558 [158/264] Linking static target lib/librte_power.a 00:01:59.558 [159/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:59.558 [160/264] Linking static target lib/librte_mempool.a 00:01:59.558 [161/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:59.558 [162/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:59.558 [163/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:59.558 [164/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:59.558 [165/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:59.558 [166/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:59.558 [167/264] Linking static target lib/librte_reorder.a 00:01:59.558 [168/264] Linking static target lib/librte_eal.a 00:01:59.558 [169/264] Linking static target lib/librte_dmadev.a 00:01:59.558 [170/264] Linking static target lib/librte_compressdev.a 00:01:59.558 [171/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:59.558 [172/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:59.558 [173/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:59.558 [174/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:59.558 [175/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:59.558 [176/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:59.558 [177/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.558 [178/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.558 [179/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:59.558 [180/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:59.558 [181/264] Linking target lib/librte_kvargs.so.24.0 00:01:59.558 [182/264] Linking static target lib/librte_mbuf.a 00:01:59.558 [183/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:59.558 [184/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.558 [185/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.558 [186/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:59.558 [187/264] Linking static target drivers/librte_bus_vdev.a 00:01:59.558 [188/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.558 [189/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.558 [190/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:59.558 [191/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.558 [192/264] Linking static target drivers/librte_bus_pci.a 00:01:59.818 [193/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:59.818 [194/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:59.818 [195/264] Linking static target lib/librte_hash.a 00:01:59.818 [196/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.818 [197/264] Compiling C 
object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.818 [198/264] Linking static target drivers/librte_mempool_ring.a 00:01:59.818 [199/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:59.818 [200/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:59.818 [201/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.818 [202/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.818 [203/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.818 [204/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.818 [205/264] Linking target lib/librte_telemetry.so.24.0 00:01:59.818 [206/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.818 [207/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:59.818 [208/264] Linking static target lib/librte_cryptodev.a 00:02:00.080 [209/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.080 [210/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:00.080 [211/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.080 [212/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.080 [213/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.342 [214/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.342 [215/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:00.342 [216/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:00.342 [217/264] Linking static target lib/librte_ethdev.a 00:02:00.342 [218/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.342 [219/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.342 [220/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.604 [221/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.604 [222/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.604 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.177 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:01.177 [225/264] Linking static target lib/librte_vhost.a 00:02:02.122 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.511 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.101 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.044 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.044 [230/264] Linking target lib/librte_eal.so.24.0 00:02:11.044 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:11.044 [232/264] Linking target lib/librte_meter.so.24.0 00:02:11.044 
[233/264] Linking target lib/librte_pci.so.24.0 00:02:11.044 [234/264] Linking target lib/librte_dmadev.so.24.0 00:02:11.044 [235/264] Linking target lib/librte_ring.so.24.0 00:02:11.044 [236/264] Linking target lib/librte_timer.so.24.0 00:02:11.044 [237/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:11.305 [238/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:11.305 [239/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:11.305 [240/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:11.305 [241/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:11.305 [242/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:11.305 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:11.305 [244/264] Linking target lib/librte_mempool.so.24.0 00:02:11.305 [245/264] Linking target lib/librte_rcu.so.24.0 00:02:11.305 [246/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:11.305 [247/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:11.566 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:11.566 [249/264] Linking target lib/librte_mbuf.so.24.0 00:02:11.566 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:11.566 [251/264] Linking target lib/librte_compressdev.so.24.0 00:02:11.566 [252/264] Linking target lib/librte_reorder.so.24.0 00:02:11.566 [253/264] Linking target lib/librte_net.so.24.0 00:02:11.566 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:02:11.828 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:11.828 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:11.828 [257/264] Linking target lib/librte_cmdline.so.24.0 00:02:11.828 [258/264] Linking target lib/librte_hash.so.24.0 00:02:11.828 [259/264] Linking target lib/librte_security.so.24.0 00:02:11.828 [260/264] Linking target lib/librte_ethdev.so.24.0 00:02:12.089 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:12.089 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:12.089 [263/264] Linking target lib/librte_power.so.24.0 00:02:12.089 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:12.089 INFO: autodetecting backend as ninja 00:02:12.089 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:13.032 CC lib/ut/ut.o 00:02:13.032 CC lib/log/log.o 00:02:13.032 CC lib/ut_mock/mock.o 00:02:13.032 CC lib/log/log_flags.o 00:02:13.032 CC lib/log/log_deprecated.o 00:02:13.294 LIB libspdk_ut.a 00:02:13.294 SO libspdk_ut.so.2.0 00:02:13.294 LIB libspdk_ut_mock.a 00:02:13.294 LIB libspdk_log.a 00:02:13.294 SO libspdk_ut_mock.so.6.0 00:02:13.294 SO libspdk_log.so.7.0 00:02:13.294 SYMLINK libspdk_ut.so 00:02:13.294 SYMLINK libspdk_ut_mock.so 00:02:13.555 SYMLINK libspdk_log.so 00:02:13.817 CC lib/util/base64.o 00:02:13.817 CC lib/util/bit_array.o 00:02:13.817 CC lib/util/cpuset.o 00:02:13.817 CC lib/util/crc16.o 00:02:13.817 CC lib/util/crc32.o 00:02:13.817 CC lib/util/crc32c.o 00:02:13.817 CC lib/util/crc32_ieee.o 00:02:13.817 CC lib/util/crc64.o 00:02:13.817 CC lib/util/dif.o 00:02:13.817 CC lib/ioat/ioat.o 00:02:13.817 CC 
lib/util/fd.o 00:02:13.817 CC lib/util/file.o 00:02:13.817 CC lib/util/hexlify.o 00:02:13.817 CC lib/dma/dma.o 00:02:13.817 CC lib/util/iov.o 00:02:13.817 CC lib/util/math.o 00:02:13.817 CC lib/util/pipe.o 00:02:13.817 CC lib/util/strerror_tls.o 00:02:13.817 CXX lib/trace_parser/trace.o 00:02:13.817 CC lib/util/string.o 00:02:13.817 CC lib/util/uuid.o 00:02:13.817 CC lib/util/fd_group.o 00:02:13.817 CC lib/util/xor.o 00:02:13.817 CC lib/util/zipf.o 00:02:14.079 CC lib/vfio_user/host/vfio_user_pci.o 00:02:14.079 CC lib/vfio_user/host/vfio_user.o 00:02:14.079 LIB libspdk_dma.a 00:02:14.079 SO libspdk_dma.so.4.0 00:02:14.079 LIB libspdk_ioat.a 00:02:14.079 SYMLINK libspdk_dma.so 00:02:14.079 SO libspdk_ioat.so.7.0 00:02:14.341 SYMLINK libspdk_ioat.so 00:02:14.341 LIB libspdk_vfio_user.a 00:02:14.341 SO libspdk_vfio_user.so.5.0 00:02:14.341 LIB libspdk_util.a 00:02:14.341 SYMLINK libspdk_vfio_user.so 00:02:14.341 SO libspdk_util.so.9.0 00:02:14.602 SYMLINK libspdk_util.so 00:02:14.602 LIB libspdk_trace_parser.a 00:02:14.602 SO libspdk_trace_parser.so.5.0 00:02:14.863 SYMLINK libspdk_trace_parser.so 00:02:14.863 CC lib/idxd/idxd.o 00:02:14.863 CC lib/idxd/idxd_user.o 00:02:14.863 CC lib/vmd/vmd.o 00:02:14.863 CC lib/vmd/led.o 00:02:14.863 CC lib/conf/conf.o 00:02:14.863 CC lib/rdma/common.o 00:02:14.863 CC lib/rdma/rdma_verbs.o 00:02:14.863 CC lib/json/json_parse.o 00:02:14.863 CC lib/json/json_util.o 00:02:14.863 CC lib/json/json_write.o 00:02:14.863 CC lib/env_dpdk/env.o 00:02:14.863 CC lib/env_dpdk/memory.o 00:02:14.863 CC lib/env_dpdk/pci.o 00:02:14.863 CC lib/env_dpdk/init.o 00:02:14.863 CC lib/env_dpdk/threads.o 00:02:14.863 CC lib/env_dpdk/pci_ioat.o 00:02:14.863 CC lib/env_dpdk/pci_virtio.o 00:02:14.863 CC lib/env_dpdk/pci_event.o 00:02:14.863 CC lib/env_dpdk/pci_vmd.o 00:02:14.863 CC lib/env_dpdk/pci_idxd.o 00:02:14.863 CC lib/env_dpdk/sigbus_handler.o 00:02:14.863 CC lib/env_dpdk/pci_dpdk.o 00:02:14.863 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:14.863 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:15.124 LIB libspdk_conf.a 00:02:15.124 SO libspdk_conf.so.6.0 00:02:15.124 LIB libspdk_rdma.a 00:02:15.124 LIB libspdk_json.a 00:02:15.124 SO libspdk_rdma.so.6.0 00:02:15.124 SO libspdk_json.so.6.0 00:02:15.124 SYMLINK libspdk_conf.so 00:02:15.386 SYMLINK libspdk_rdma.so 00:02:15.386 SYMLINK libspdk_json.so 00:02:15.386 LIB libspdk_idxd.a 00:02:15.386 SO libspdk_idxd.so.12.0 00:02:15.386 LIB libspdk_vmd.a 00:02:15.386 SYMLINK libspdk_idxd.so 00:02:15.386 SO libspdk_vmd.so.6.0 00:02:15.658 SYMLINK libspdk_vmd.so 00:02:15.658 CC lib/jsonrpc/jsonrpc_server.o 00:02:15.658 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:15.658 CC lib/jsonrpc/jsonrpc_client.o 00:02:15.658 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:15.920 LIB libspdk_jsonrpc.a 00:02:15.920 SO libspdk_jsonrpc.so.6.0 00:02:15.920 SYMLINK libspdk_jsonrpc.so 00:02:16.181 LIB libspdk_env_dpdk.a 00:02:16.181 SO libspdk_env_dpdk.so.14.0 00:02:16.181 CC lib/rpc/rpc.o 00:02:16.442 SYMLINK libspdk_env_dpdk.so 00:02:16.442 LIB libspdk_rpc.a 00:02:16.442 SO libspdk_rpc.so.6.0 00:02:16.702 SYMLINK libspdk_rpc.so 00:02:16.964 CC lib/keyring/keyring_rpc.o 00:02:16.964 CC lib/keyring/keyring.o 00:02:16.964 CC lib/trace/trace.o 00:02:16.964 CC lib/trace/trace_flags.o 00:02:16.964 CC lib/trace/trace_rpc.o 00:02:16.964 CC lib/notify/notify.o 00:02:16.964 CC lib/notify/notify_rpc.o 00:02:17.225 LIB libspdk_notify.a 00:02:17.225 LIB libspdk_trace.a 00:02:17.225 SO libspdk_notify.so.6.0 00:02:17.225 LIB libspdk_keyring.a 00:02:17.225 SO libspdk_trace.so.10.0 00:02:17.225 
SO libspdk_keyring.so.1.0 00:02:17.225 SYMLINK libspdk_notify.so 00:02:17.225 SYMLINK libspdk_trace.so 00:02:17.225 SYMLINK libspdk_keyring.so 00:02:17.798 CC lib/sock/sock.o 00:02:17.798 CC lib/sock/sock_rpc.o 00:02:17.798 CC lib/thread/thread.o 00:02:17.798 CC lib/thread/iobuf.o 00:02:18.059 LIB libspdk_sock.a 00:02:18.059 SO libspdk_sock.so.9.0 00:02:18.059 SYMLINK libspdk_sock.so 00:02:18.321 CC lib/nvme/nvme_ctrlr.o 00:02:18.321 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:18.321 CC lib/nvme/nvme_fabric.o 00:02:18.321 CC lib/nvme/nvme_ns_cmd.o 00:02:18.321 CC lib/nvme/nvme_ns.o 00:02:18.321 CC lib/nvme/nvme_pcie_common.o 00:02:18.321 CC lib/nvme/nvme_pcie.o 00:02:18.321 CC lib/nvme/nvme_qpair.o 00:02:18.321 CC lib/nvme/nvme.o 00:02:18.321 CC lib/nvme/nvme_quirks.o 00:02:18.321 CC lib/nvme/nvme_transport.o 00:02:18.321 CC lib/nvme/nvme_discovery.o 00:02:18.321 CC lib/nvme/nvme_tcp.o 00:02:18.321 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:18.321 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:18.321 CC lib/nvme/nvme_opal.o 00:02:18.321 CC lib/nvme/nvme_io_msg.o 00:02:18.321 CC lib/nvme/nvme_poll_group.o 00:02:18.321 CC lib/nvme/nvme_zns.o 00:02:18.321 CC lib/nvme/nvme_stubs.o 00:02:18.321 CC lib/nvme/nvme_auth.o 00:02:18.321 CC lib/nvme/nvme_cuse.o 00:02:18.583 CC lib/nvme/nvme_vfio_user.o 00:02:18.583 CC lib/nvme/nvme_rdma.o 00:02:18.844 LIB libspdk_thread.a 00:02:18.844 SO libspdk_thread.so.10.0 00:02:19.104 SYMLINK libspdk_thread.so 00:02:19.366 CC lib/vfu_tgt/tgt_endpoint.o 00:02:19.366 CC lib/vfu_tgt/tgt_rpc.o 00:02:19.366 CC lib/blob/blobstore.o 00:02:19.366 CC lib/blob/request.o 00:02:19.366 CC lib/blob/zeroes.o 00:02:19.366 CC lib/init/json_config.o 00:02:19.366 CC lib/blob/blob_bs_dev.o 00:02:19.366 CC lib/init/subsystem.o 00:02:19.366 CC lib/init/subsystem_rpc.o 00:02:19.366 CC lib/init/rpc.o 00:02:19.366 CC lib/accel/accel.o 00:02:19.366 CC lib/virtio/virtio.o 00:02:19.366 CC lib/accel/accel_rpc.o 00:02:19.366 CC lib/virtio/virtio_vhost_user.o 00:02:19.366 CC lib/accel/accel_sw.o 00:02:19.366 CC lib/virtio/virtio_vfio_user.o 00:02:19.366 CC lib/virtio/virtio_pci.o 00:02:19.627 LIB libspdk_init.a 00:02:19.627 SO libspdk_init.so.5.0 00:02:19.627 LIB libspdk_vfu_tgt.a 00:02:19.627 SO libspdk_vfu_tgt.so.3.0 00:02:19.627 LIB libspdk_virtio.a 00:02:19.627 SYMLINK libspdk_init.so 00:02:19.627 SO libspdk_virtio.so.7.0 00:02:19.627 SYMLINK libspdk_vfu_tgt.so 00:02:19.888 SYMLINK libspdk_virtio.so 00:02:20.149 CC lib/event/app.o 00:02:20.149 CC lib/event/reactor.o 00:02:20.149 CC lib/event/log_rpc.o 00:02:20.149 CC lib/event/app_rpc.o 00:02:20.149 CC lib/event/scheduler_static.o 00:02:20.149 LIB libspdk_accel.a 00:02:20.149 LIB libspdk_nvme.a 00:02:20.149 SO libspdk_accel.so.15.0 00:02:20.411 SYMLINK libspdk_accel.so 00:02:20.411 SO libspdk_nvme.so.13.0 00:02:20.411 LIB libspdk_event.a 00:02:20.411 SO libspdk_event.so.13.0 00:02:20.672 SYMLINK libspdk_event.so 00:02:20.672 SYMLINK libspdk_nvme.so 00:02:20.672 CC lib/bdev/bdev.o 00:02:20.672 CC lib/bdev/bdev_zone.o 00:02:20.672 CC lib/bdev/bdev_rpc.o 00:02:20.672 CC lib/bdev/part.o 00:02:20.672 CC lib/bdev/scsi_nvme.o 00:02:21.614 LIB libspdk_blob.a 00:02:21.876 SO libspdk_blob.so.11.0 00:02:21.876 SYMLINK libspdk_blob.so 00:02:22.137 CC lib/lvol/lvol.o 00:02:22.137 CC lib/blobfs/blobfs.o 00:02:22.137 CC lib/blobfs/tree.o 00:02:23.082 LIB libspdk_bdev.a 00:02:23.082 LIB libspdk_blobfs.a 00:02:23.082 LIB libspdk_lvol.a 00:02:23.082 SO libspdk_bdev.so.15.0 00:02:23.082 SO libspdk_blobfs.so.10.0 00:02:23.082 SO libspdk_lvol.so.10.0 00:02:23.082 SYMLINK 
libspdk_blobfs.so 00:02:23.082 SYMLINK libspdk_bdev.so 00:02:23.082 SYMLINK libspdk_lvol.so 00:02:23.343 CC lib/nvmf/ctrlr.o 00:02:23.343 CC lib/nvmf/ctrlr_bdev.o 00:02:23.343 CC lib/nvmf/ctrlr_discovery.o 00:02:23.343 CC lib/nbd/nbd.o 00:02:23.343 CC lib/nvmf/subsystem.o 00:02:23.343 CC lib/nvmf/nvmf.o 00:02:23.343 CC lib/nbd/nbd_rpc.o 00:02:23.343 CC lib/nvmf/tcp.o 00:02:23.343 CC lib/nvmf/nvmf_rpc.o 00:02:23.344 CC lib/nvmf/transport.o 00:02:23.344 CC lib/nvmf/vfio_user.o 00:02:23.344 CC lib/nvmf/rdma.o 00:02:23.344 CC lib/scsi/dev.o 00:02:23.344 CC lib/ublk/ublk.o 00:02:23.344 CC lib/ublk/ublk_rpc.o 00:02:23.344 CC lib/scsi/lun.o 00:02:23.344 CC lib/ftl/ftl_core.o 00:02:23.344 CC lib/scsi/port.o 00:02:23.344 CC lib/ftl/ftl_init.o 00:02:23.344 CC lib/scsi/scsi.o 00:02:23.344 CC lib/scsi/scsi_bdev.o 00:02:23.344 CC lib/ftl/ftl_layout.o 00:02:23.344 CC lib/scsi/scsi_pr.o 00:02:23.344 CC lib/ftl/ftl_debug.o 00:02:23.344 CC lib/scsi/scsi_rpc.o 00:02:23.344 CC lib/ftl/ftl_io.o 00:02:23.344 CC lib/scsi/task.o 00:02:23.344 CC lib/ftl/ftl_sb.o 00:02:23.344 CC lib/ftl/ftl_l2p.o 00:02:23.344 CC lib/ftl/ftl_l2p_flat.o 00:02:23.344 CC lib/ftl/ftl_band_ops.o 00:02:23.344 CC lib/ftl/ftl_nv_cache.o 00:02:23.344 CC lib/ftl/ftl_band.o 00:02:23.344 CC lib/ftl/ftl_writer.o 00:02:23.344 CC lib/ftl/ftl_rq.o 00:02:23.344 CC lib/ftl/ftl_reloc.o 00:02:23.344 CC lib/ftl/ftl_l2p_cache.o 00:02:23.344 CC lib/ftl/ftl_p2l.o 00:02:23.344 CC lib/ftl/mngt/ftl_mngt.o 00:02:23.344 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:23.344 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:23.344 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:23.344 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:23.344 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:23.344 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:23.344 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:23.344 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:23.344 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:23.344 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:23.344 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:23.344 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:23.344 CC lib/ftl/utils/ftl_conf.o 00:02:23.602 CC lib/ftl/utils/ftl_md.o 00:02:23.602 CC lib/ftl/utils/ftl_mempool.o 00:02:23.602 CC lib/ftl/utils/ftl_bitmap.o 00:02:23.602 CC lib/ftl/utils/ftl_property.o 00:02:23.602 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:23.602 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:23.602 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:23.602 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:23.602 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:23.602 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:23.602 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:23.602 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:23.602 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:23.602 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:23.602 CC lib/ftl/base/ftl_base_dev.o 00:02:23.602 CC lib/ftl/ftl_trace.o 00:02:23.602 CC lib/ftl/base/ftl_base_bdev.o 00:02:23.861 LIB libspdk_nbd.a 00:02:23.861 SO libspdk_nbd.so.7.0 00:02:24.123 SYMLINK libspdk_nbd.so 00:02:24.123 LIB libspdk_scsi.a 00:02:24.123 SO libspdk_scsi.so.9.0 00:02:24.123 LIB libspdk_ublk.a 00:02:24.123 SO libspdk_ublk.so.3.0 00:02:24.123 SYMLINK libspdk_scsi.so 00:02:24.123 SYMLINK libspdk_ublk.so 00:02:24.383 LIB libspdk_ftl.a 00:02:24.383 CC lib/iscsi/conn.o 00:02:24.383 CC lib/iscsi/iscsi.o 00:02:24.383 CC lib/iscsi/init_grp.o 00:02:24.383 CC lib/iscsi/param.o 00:02:24.383 CC lib/iscsi/md5.o 00:02:24.383 CC lib/iscsi/portal_grp.o 00:02:24.383 CC lib/iscsi/tgt_node.o 00:02:24.383 CC lib/iscsi/task.o 00:02:24.383 CC lib/iscsi/iscsi_subsystem.o 00:02:24.383 CC lib/iscsi/iscsi_rpc.o 
00:02:24.383 CC lib/vhost/vhost.o 00:02:24.383 CC lib/vhost/vhost_scsi.o 00:02:24.383 CC lib/vhost/vhost_rpc.o 00:02:24.383 CC lib/vhost/vhost_blk.o 00:02:24.383 SO libspdk_ftl.so.9.0 00:02:24.383 CC lib/vhost/rte_vhost_user.o 00:02:24.955 SYMLINK libspdk_ftl.so 00:02:25.216 LIB libspdk_nvmf.a 00:02:25.216 SO libspdk_nvmf.so.18.0 00:02:25.477 SYMLINK libspdk_nvmf.so 00:02:25.477 LIB libspdk_vhost.a 00:02:25.477 SO libspdk_vhost.so.8.0 00:02:25.477 SYMLINK libspdk_vhost.so 00:02:25.738 LIB libspdk_iscsi.a 00:02:25.738 SO libspdk_iscsi.so.8.0 00:02:25.738 SYMLINK libspdk_iscsi.so 00:02:26.310 CC module/env_dpdk/env_dpdk_rpc.o 00:02:26.310 CC module/vfu_device/vfu_virtio.o 00:02:26.310 CC module/vfu_device/vfu_virtio_blk.o 00:02:26.310 CC module/vfu_device/vfu_virtio_scsi.o 00:02:26.310 CC module/vfu_device/vfu_virtio_rpc.o 00:02:26.570 CC module/accel/dsa/accel_dsa.o 00:02:26.570 CC module/accel/dsa/accel_dsa_rpc.o 00:02:26.570 LIB libspdk_env_dpdk_rpc.a 00:02:26.570 CC module/scheduler/gscheduler/gscheduler.o 00:02:26.570 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:26.570 CC module/blob/bdev/blob_bdev.o 00:02:26.570 CC module/sock/posix/posix.o 00:02:26.570 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:26.570 CC module/accel/error/accel_error.o 00:02:26.570 CC module/accel/error/accel_error_rpc.o 00:02:26.570 SO libspdk_env_dpdk_rpc.so.6.0 00:02:26.570 CC module/keyring/file/keyring.o 00:02:26.570 CC module/accel/ioat/accel_ioat.o 00:02:26.570 CC module/keyring/file/keyring_rpc.o 00:02:26.570 CC module/accel/ioat/accel_ioat_rpc.o 00:02:26.570 CC module/accel/iaa/accel_iaa.o 00:02:26.570 CC module/accel/iaa/accel_iaa_rpc.o 00:02:26.570 SYMLINK libspdk_env_dpdk_rpc.so 00:02:26.570 LIB libspdk_scheduler_gscheduler.a 00:02:26.570 LIB libspdk_scheduler_dpdk_governor.a 00:02:26.570 LIB libspdk_scheduler_dynamic.a 00:02:26.570 LIB libspdk_accel_dsa.a 00:02:26.570 SO libspdk_scheduler_gscheduler.so.4.0 00:02:26.570 LIB libspdk_accel_error.a 00:02:26.570 LIB libspdk_keyring_file.a 00:02:26.570 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:26.570 LIB libspdk_accel_ioat.a 00:02:26.830 SO libspdk_scheduler_dynamic.so.4.0 00:02:26.830 SO libspdk_accel_dsa.so.5.0 00:02:26.830 SO libspdk_accel_error.so.2.0 00:02:26.830 SO libspdk_accel_ioat.so.6.0 00:02:26.830 SO libspdk_keyring_file.so.1.0 00:02:26.830 LIB libspdk_accel_iaa.a 00:02:26.830 SYMLINK libspdk_scheduler_gscheduler.so 00:02:26.830 LIB libspdk_blob_bdev.a 00:02:26.830 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:26.830 SYMLINK libspdk_scheduler_dynamic.so 00:02:26.830 SO libspdk_accel_iaa.so.3.0 00:02:26.830 SO libspdk_blob_bdev.so.11.0 00:02:26.830 SYMLINK libspdk_accel_dsa.so 00:02:26.830 SYMLINK libspdk_accel_error.so 00:02:26.830 SYMLINK libspdk_accel_ioat.so 00:02:26.830 SYMLINK libspdk_keyring_file.so 00:02:26.830 SYMLINK libspdk_accel_iaa.so 00:02:26.830 SYMLINK libspdk_blob_bdev.so 00:02:26.830 LIB libspdk_vfu_device.a 00:02:26.830 SO libspdk_vfu_device.so.3.0 00:02:26.830 SYMLINK libspdk_vfu_device.so 00:02:27.091 LIB libspdk_sock_posix.a 00:02:27.091 SO libspdk_sock_posix.so.6.0 00:02:27.351 SYMLINK libspdk_sock_posix.so 00:02:27.351 CC module/bdev/error/vbdev_error.o 00:02:27.351 CC module/bdev/error/vbdev_error_rpc.o 00:02:27.351 CC module/bdev/gpt/gpt.o 00:02:27.351 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:27.351 CC module/bdev/gpt/vbdev_gpt.o 00:02:27.351 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:27.351 CC module/bdev/malloc/bdev_malloc.o 00:02:27.351 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:02:27.351 CC module/bdev/ftl/bdev_ftl.o 00:02:27.351 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:27.351 CC module/bdev/nvme/bdev_nvme.o 00:02:27.351 CC module/blobfs/bdev/blobfs_bdev.o 00:02:27.351 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:27.351 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:27.351 CC module/bdev/nvme/vbdev_opal.o 00:02:27.351 CC module/bdev/iscsi/bdev_iscsi.o 00:02:27.351 CC module/bdev/nvme/nvme_rpc.o 00:02:27.351 CC module/bdev/nvme/bdev_mdns_client.o 00:02:27.351 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:27.351 CC module/bdev/passthru/vbdev_passthru.o 00:02:27.351 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:27.351 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:27.351 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:27.351 CC module/bdev/delay/vbdev_delay.o 00:02:27.351 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:27.351 CC module/bdev/aio/bdev_aio.o 00:02:27.351 CC module/bdev/lvol/vbdev_lvol.o 00:02:27.351 CC module/bdev/raid/bdev_raid.o 00:02:27.351 CC module/bdev/aio/bdev_aio_rpc.o 00:02:27.351 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:27.351 CC module/bdev/raid/bdev_raid_sb.o 00:02:27.351 CC module/bdev/raid/bdev_raid_rpc.o 00:02:27.351 CC module/bdev/raid/raid1.o 00:02:27.351 CC module/bdev/null/bdev_null.o 00:02:27.351 CC module/bdev/raid/raid0.o 00:02:27.351 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:27.351 CC module/bdev/raid/concat.o 00:02:27.351 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:27.351 CC module/bdev/null/bdev_null_rpc.o 00:02:27.351 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:27.351 CC module/bdev/split/vbdev_split.o 00:02:27.351 CC module/bdev/split/vbdev_split_rpc.o 00:02:27.610 LIB libspdk_blobfs_bdev.a 00:02:27.610 LIB libspdk_bdev_error.a 00:02:27.610 LIB libspdk_bdev_gpt.a 00:02:27.610 LIB libspdk_bdev_split.a 00:02:27.610 SO libspdk_blobfs_bdev.so.6.0 00:02:27.610 LIB libspdk_bdev_ftl.a 00:02:27.610 LIB libspdk_bdev_null.a 00:02:27.610 SO libspdk_bdev_split.so.6.0 00:02:27.610 LIB libspdk_bdev_passthru.a 00:02:27.610 SO libspdk_bdev_gpt.so.6.0 00:02:27.610 SO libspdk_bdev_error.so.6.0 00:02:27.610 SO libspdk_bdev_null.so.6.0 00:02:27.610 SO libspdk_bdev_ftl.so.6.0 00:02:27.610 LIB libspdk_bdev_zone_block.a 00:02:27.869 SYMLINK libspdk_blobfs_bdev.so 00:02:27.869 LIB libspdk_bdev_aio.a 00:02:27.869 SO libspdk_bdev_passthru.so.6.0 00:02:27.869 SYMLINK libspdk_bdev_split.so 00:02:27.869 SO libspdk_bdev_zone_block.so.6.0 00:02:27.869 SYMLINK libspdk_bdev_gpt.so 00:02:27.869 LIB libspdk_bdev_malloc.a 00:02:27.869 SYMLINK libspdk_bdev_null.so 00:02:27.869 LIB libspdk_bdev_delay.a 00:02:27.869 SYMLINK libspdk_bdev_error.so 00:02:27.869 SO libspdk_bdev_aio.so.6.0 00:02:27.869 SYMLINK libspdk_bdev_ftl.so 00:02:27.869 LIB libspdk_bdev_iscsi.a 00:02:27.869 SO libspdk_bdev_malloc.so.6.0 00:02:27.869 SO libspdk_bdev_delay.so.6.0 00:02:27.869 SYMLINK libspdk_bdev_passthru.so 00:02:27.869 SO libspdk_bdev_iscsi.so.6.0 00:02:27.869 SYMLINK libspdk_bdev_zone_block.so 00:02:27.869 SYMLINK libspdk_bdev_delay.so 00:02:27.869 SYMLINK libspdk_bdev_aio.so 00:02:27.869 LIB libspdk_bdev_lvol.a 00:02:27.869 SYMLINK libspdk_bdev_malloc.so 00:02:27.869 SYMLINK libspdk_bdev_iscsi.so 00:02:27.869 SO libspdk_bdev_lvol.so.6.0 00:02:27.869 LIB libspdk_bdev_virtio.a 00:02:27.869 SYMLINK libspdk_bdev_lvol.so 00:02:28.131 SO libspdk_bdev_virtio.so.6.0 00:02:28.131 SYMLINK libspdk_bdev_virtio.so 00:02:28.131 LIB libspdk_bdev_raid.a 00:02:28.392 SO libspdk_bdev_raid.so.6.0 00:02:28.392 SYMLINK libspdk_bdev_raid.so 00:02:29.423 LIB 
libspdk_bdev_nvme.a 00:02:29.423 SO libspdk_bdev_nvme.so.7.0 00:02:29.423 SYMLINK libspdk_bdev_nvme.so 00:02:29.993 CC module/event/subsystems/scheduler/scheduler.o 00:02:29.993 CC module/event/subsystems/iobuf/iobuf.o 00:02:29.993 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:29.993 CC module/event/subsystems/keyring/keyring.o 00:02:29.993 CC module/event/subsystems/vmd/vmd.o 00:02:29.993 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:29.993 CC module/event/subsystems/sock/sock.o 00:02:29.993 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:29.993 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:30.253 LIB libspdk_event_scheduler.a 00:02:30.253 LIB libspdk_event_keyring.a 00:02:30.253 LIB libspdk_event_vmd.a 00:02:30.253 LIB libspdk_event_vhost_blk.a 00:02:30.253 SO libspdk_event_scheduler.so.4.0 00:02:30.253 LIB libspdk_event_sock.a 00:02:30.253 SO libspdk_event_vmd.so.6.0 00:02:30.253 SO libspdk_event_keyring.so.1.0 00:02:30.253 LIB libspdk_event_vfu_tgt.a 00:02:30.253 LIB libspdk_event_iobuf.a 00:02:30.253 SO libspdk_event_vhost_blk.so.3.0 00:02:30.253 SO libspdk_event_sock.so.5.0 00:02:30.253 SO libspdk_event_vfu_tgt.so.3.0 00:02:30.254 SYMLINK libspdk_event_scheduler.so 00:02:30.254 SO libspdk_event_iobuf.so.3.0 00:02:30.254 SYMLINK libspdk_event_keyring.so 00:02:30.254 SYMLINK libspdk_event_vmd.so 00:02:30.254 SYMLINK libspdk_event_vhost_blk.so 00:02:30.254 SYMLINK libspdk_event_sock.so 00:02:30.254 SYMLINK libspdk_event_vfu_tgt.so 00:02:30.254 SYMLINK libspdk_event_iobuf.so 00:02:30.826 CC module/event/subsystems/accel/accel.o 00:02:30.826 LIB libspdk_event_accel.a 00:02:30.826 SO libspdk_event_accel.so.6.0 00:02:31.087 SYMLINK libspdk_event_accel.so 00:02:31.347 CC module/event/subsystems/bdev/bdev.o 00:02:31.608 LIB libspdk_event_bdev.a 00:02:31.608 SO libspdk_event_bdev.so.6.0 00:02:31.608 SYMLINK libspdk_event_bdev.so 00:02:31.870 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:31.870 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:31.870 CC module/event/subsystems/ublk/ublk.o 00:02:31.870 CC module/event/subsystems/scsi/scsi.o 00:02:31.870 CC module/event/subsystems/nbd/nbd.o 00:02:32.132 LIB libspdk_event_ublk.a 00:02:32.132 LIB libspdk_event_nbd.a 00:02:32.132 LIB libspdk_event_scsi.a 00:02:32.132 SO libspdk_event_ublk.so.3.0 00:02:32.132 SO libspdk_event_nbd.so.6.0 00:02:32.132 LIB libspdk_event_nvmf.a 00:02:32.132 SO libspdk_event_scsi.so.6.0 00:02:32.132 SO libspdk_event_nvmf.so.6.0 00:02:32.132 SYMLINK libspdk_event_ublk.so 00:02:32.132 SYMLINK libspdk_event_nbd.so 00:02:32.132 SYMLINK libspdk_event_scsi.so 00:02:32.393 SYMLINK libspdk_event_nvmf.so 00:02:32.654 CC module/event/subsystems/iscsi/iscsi.o 00:02:32.654 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:32.654 LIB libspdk_event_vhost_scsi.a 00:02:32.915 LIB libspdk_event_iscsi.a 00:02:32.915 SO libspdk_event_vhost_scsi.so.3.0 00:02:32.915 SO libspdk_event_iscsi.so.6.0 00:02:32.915 SYMLINK libspdk_event_vhost_scsi.so 00:02:32.915 SYMLINK libspdk_event_iscsi.so 00:02:33.175 SO libspdk.so.6.0 00:02:33.175 SYMLINK libspdk.so 00:02:33.436 TEST_HEADER include/spdk/accel.h 00:02:33.436 TEST_HEADER include/spdk/accel_module.h 00:02:33.436 TEST_HEADER include/spdk/barrier.h 00:02:33.436 TEST_HEADER include/spdk/base64.h 00:02:33.436 TEST_HEADER include/spdk/assert.h 00:02:33.436 TEST_HEADER include/spdk/bdev.h 00:02:33.436 TEST_HEADER include/spdk/bdev_zone.h 00:02:33.436 TEST_HEADER include/spdk/bdev_module.h 00:02:33.436 TEST_HEADER include/spdk/bit_pool.h 00:02:33.436 TEST_HEADER 
include/spdk/bit_array.h 00:02:33.436 CC test/rpc_client/rpc_client_test.o 00:02:33.436 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:33.436 TEST_HEADER include/spdk/blob_bdev.h 00:02:33.436 TEST_HEADER include/spdk/blobfs.h 00:02:33.436 TEST_HEADER include/spdk/blob.h 00:02:33.436 TEST_HEADER include/spdk/config.h 00:02:33.436 TEST_HEADER include/spdk/conf.h 00:02:33.436 TEST_HEADER include/spdk/crc32.h 00:02:33.436 TEST_HEADER include/spdk/cpuset.h 00:02:33.436 TEST_HEADER include/spdk/dif.h 00:02:33.436 TEST_HEADER include/spdk/crc16.h 00:02:33.436 CC app/trace_record/trace_record.o 00:02:33.436 TEST_HEADER include/spdk/crc64.h 00:02:33.436 TEST_HEADER include/spdk/env_dpdk.h 00:02:33.436 TEST_HEADER include/spdk/dma.h 00:02:33.436 TEST_HEADER include/spdk/endian.h 00:02:33.436 TEST_HEADER include/spdk/fd_group.h 00:02:33.436 TEST_HEADER include/spdk/env.h 00:02:33.436 CC app/spdk_lspci/spdk_lspci.o 00:02:33.436 TEST_HEADER include/spdk/event.h 00:02:33.436 CC app/spdk_top/spdk_top.o 00:02:33.436 TEST_HEADER include/spdk/fd.h 00:02:33.436 TEST_HEADER include/spdk/file.h 00:02:33.436 TEST_HEADER include/spdk/ftl.h 00:02:33.436 CXX app/trace/trace.o 00:02:33.436 CC app/spdk_nvme_identify/identify.o 00:02:33.436 TEST_HEADER include/spdk/hexlify.h 00:02:33.436 TEST_HEADER include/spdk/histogram_data.h 00:02:33.436 TEST_HEADER include/spdk/gpt_spec.h 00:02:33.436 TEST_HEADER include/spdk/idxd.h 00:02:33.436 TEST_HEADER include/spdk/init.h 00:02:33.436 TEST_HEADER include/spdk/idxd_spec.h 00:02:33.436 TEST_HEADER include/spdk/ioat.h 00:02:33.436 TEST_HEADER include/spdk/iscsi_spec.h 00:02:33.436 CC app/spdk_nvme_discover/discovery_aer.o 00:02:33.436 TEST_HEADER include/spdk/ioat_spec.h 00:02:33.436 TEST_HEADER include/spdk/json.h 00:02:33.436 CC app/spdk_nvme_perf/perf.o 00:02:33.436 TEST_HEADER include/spdk/keyring_module.h 00:02:33.436 TEST_HEADER include/spdk/jsonrpc.h 00:02:33.436 TEST_HEADER include/spdk/keyring.h 00:02:33.436 TEST_HEADER include/spdk/likely.h 00:02:33.436 TEST_HEADER include/spdk/memory.h 00:02:33.436 TEST_HEADER include/spdk/log.h 00:02:33.436 CC app/iscsi_tgt/iscsi_tgt.o 00:02:33.436 TEST_HEADER include/spdk/lvol.h 00:02:33.436 TEST_HEADER include/spdk/mmio.h 00:02:33.436 TEST_HEADER include/spdk/nbd.h 00:02:33.436 TEST_HEADER include/spdk/notify.h 00:02:33.436 TEST_HEADER include/spdk/nvme_intel.h 00:02:33.436 TEST_HEADER include/spdk/nvme.h 00:02:33.436 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:33.436 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:33.436 TEST_HEADER include/spdk/nvme_zns.h 00:02:33.436 TEST_HEADER include/spdk/nvme_spec.h 00:02:33.436 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:33.436 CC app/spdk_tgt/spdk_tgt.o 00:02:33.436 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:33.436 TEST_HEADER include/spdk/nvmf.h 00:02:33.436 TEST_HEADER include/spdk/nvmf_transport.h 00:02:33.436 TEST_HEADER include/spdk/nvmf_spec.h 00:02:33.436 TEST_HEADER include/spdk/opal.h 00:02:33.436 TEST_HEADER include/spdk/opal_spec.h 00:02:33.436 TEST_HEADER include/spdk/pci_ids.h 00:02:33.436 TEST_HEADER include/spdk/pipe.h 00:02:33.436 TEST_HEADER include/spdk/queue.h 00:02:33.436 CC app/nvmf_tgt/nvmf_main.o 00:02:33.436 TEST_HEADER include/spdk/reduce.h 00:02:33.436 TEST_HEADER include/spdk/rpc.h 00:02:33.436 TEST_HEADER include/spdk/scheduler.h 00:02:33.436 CC app/spdk_dd/spdk_dd.o 00:02:33.436 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:33.436 TEST_HEADER include/spdk/sock.h 00:02:33.436 TEST_HEADER include/spdk/scsi.h 00:02:33.436 TEST_HEADER include/spdk/stdinc.h 
00:02:33.436 TEST_HEADER include/spdk/scsi_spec.h 00:02:33.436 TEST_HEADER include/spdk/string.h 00:02:33.436 TEST_HEADER include/spdk/thread.h 00:02:33.436 TEST_HEADER include/spdk/trace.h 00:02:33.436 TEST_HEADER include/spdk/tree.h 00:02:33.436 TEST_HEADER include/spdk/trace_parser.h 00:02:33.701 TEST_HEADER include/spdk/ublk.h 00:02:33.701 TEST_HEADER include/spdk/util.h 00:02:33.701 CC app/vhost/vhost.o 00:02:33.701 TEST_HEADER include/spdk/uuid.h 00:02:33.701 TEST_HEADER include/spdk/version.h 00:02:33.701 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:33.701 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:33.701 TEST_HEADER include/spdk/vmd.h 00:02:33.701 TEST_HEADER include/spdk/xor.h 00:02:33.701 TEST_HEADER include/spdk/zipf.h 00:02:33.701 TEST_HEADER include/spdk/vhost.h 00:02:33.701 CXX test/cpp_headers/accel.o 00:02:33.701 CXX test/cpp_headers/accel_module.o 00:02:33.701 CXX test/cpp_headers/assert.o 00:02:33.701 CXX test/cpp_headers/barrier.o 00:02:33.701 CXX test/cpp_headers/bdev.o 00:02:33.701 CXX test/cpp_headers/base64.o 00:02:33.701 CXX test/cpp_headers/bit_array.o 00:02:33.701 CXX test/cpp_headers/bdev_module.o 00:02:33.701 CXX test/cpp_headers/bdev_zone.o 00:02:33.701 CXX test/cpp_headers/blob_bdev.o 00:02:33.701 CXX test/cpp_headers/bit_pool.o 00:02:33.701 CXX test/cpp_headers/blobfs_bdev.o 00:02:33.701 CXX test/cpp_headers/blob.o 00:02:33.701 CXX test/cpp_headers/blobfs.o 00:02:33.701 CXX test/cpp_headers/conf.o 00:02:33.701 CXX test/cpp_headers/crc16.o 00:02:33.701 CXX test/cpp_headers/config.o 00:02:33.701 CXX test/cpp_headers/cpuset.o 00:02:33.701 CXX test/cpp_headers/crc64.o 00:02:33.701 CXX test/cpp_headers/crc32.o 00:02:33.701 CXX test/cpp_headers/dif.o 00:02:33.701 CXX test/cpp_headers/dma.o 00:02:33.701 CXX test/cpp_headers/endian.o 00:02:33.701 CXX test/cpp_headers/env_dpdk.o 00:02:33.701 CXX test/cpp_headers/env.o 00:02:33.701 CXX test/cpp_headers/event.o 00:02:33.701 CXX test/cpp_headers/file.o 00:02:33.701 CXX test/cpp_headers/gpt_spec.o 00:02:33.701 CXX test/cpp_headers/fd_group.o 00:02:33.701 CXX test/cpp_headers/fd.o 00:02:33.701 CXX test/cpp_headers/hexlify.o 00:02:33.701 CXX test/cpp_headers/ftl.o 00:02:33.701 CXX test/cpp_headers/histogram_data.o 00:02:33.701 CXX test/cpp_headers/idxd_spec.o 00:02:33.701 CXX test/cpp_headers/idxd.o 00:02:33.701 CXX test/cpp_headers/ioat.o 00:02:33.701 CXX test/cpp_headers/init.o 00:02:33.701 CXX test/cpp_headers/json.o 00:02:33.701 CXX test/cpp_headers/ioat_spec.o 00:02:33.701 CXX test/cpp_headers/jsonrpc.o 00:02:33.701 CXX test/cpp_headers/iscsi_spec.o 00:02:33.701 CXX test/cpp_headers/keyring.o 00:02:33.701 CXX test/cpp_headers/likely.o 00:02:33.701 CXX test/cpp_headers/keyring_module.o 00:02:33.701 CXX test/cpp_headers/log.o 00:02:33.701 CXX test/cpp_headers/lvol.o 00:02:33.701 CXX test/cpp_headers/mmio.o 00:02:33.701 CXX test/cpp_headers/memory.o 00:02:33.701 CXX test/cpp_headers/nbd.o 00:02:33.701 CXX test/cpp_headers/notify.o 00:02:33.701 CXX test/cpp_headers/nvme_intel.o 00:02:33.701 CXX test/cpp_headers/nvme.o 00:02:33.701 CXX test/cpp_headers/nvme_ocssd.o 00:02:33.701 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:33.701 CXX test/cpp_headers/nvme_spec.o 00:02:33.701 CXX test/cpp_headers/nvme_zns.o 00:02:33.701 CXX test/cpp_headers/nvmf_cmd.o 00:02:33.701 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:33.701 CXX test/cpp_headers/nvmf.o 00:02:33.701 CXX test/cpp_headers/nvmf_transport.o 00:02:33.701 CXX test/cpp_headers/nvmf_spec.o 00:02:33.701 CXX test/cpp_headers/opal_spec.o 00:02:33.701 CXX 
test/cpp_headers/opal.o 00:02:33.701 CXX test/cpp_headers/pci_ids.o 00:02:33.701 CXX test/cpp_headers/queue.o 00:02:33.701 CXX test/cpp_headers/pipe.o 00:02:33.701 CXX test/cpp_headers/reduce.o 00:02:33.701 CXX test/cpp_headers/scheduler.o 00:02:33.701 CXX test/cpp_headers/rpc.o 00:02:33.701 CXX test/cpp_headers/scsi.o 00:02:33.701 CXX test/cpp_headers/scsi_spec.o 00:02:33.701 CC test/event/reactor/reactor.o 00:02:33.701 CC examples/vmd/lsvmd/lsvmd.o 00:02:33.701 CC test/env/memory/memory_ut.o 00:02:33.701 CC examples/vmd/led/led.o 00:02:33.701 CC test/app/stub/stub.o 00:02:33.701 CC test/env/vtophys/vtophys.o 00:02:33.701 CC examples/util/zipf/zipf.o 00:02:33.701 CC test/event/reactor_perf/reactor_perf.o 00:02:33.701 CC test/env/pci/pci_ut.o 00:02:33.701 CC test/app/histogram_perf/histogram_perf.o 00:02:33.701 CC test/event/event_perf/event_perf.o 00:02:33.701 CC test/nvme/reset/reset.o 00:02:33.701 CC test/nvme/err_injection/err_injection.o 00:02:33.701 CC test/app/jsoncat/jsoncat.o 00:02:33.969 CC test/nvme/aer/aer.o 00:02:33.969 CC test/nvme/overhead/overhead.o 00:02:33.969 CC test/event/app_repeat/app_repeat.o 00:02:33.969 CC app/fio/nvme/fio_plugin.o 00:02:33.969 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:33.969 CC test/bdev/bdevio/bdevio.o 00:02:33.969 CC test/nvme/sgl/sgl.o 00:02:33.969 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:33.969 CC test/nvme/startup/startup.o 00:02:33.969 CC test/nvme/connect_stress/connect_stress.o 00:02:33.969 CXX test/cpp_headers/sock.o 00:02:33.969 CC test/nvme/e2edp/nvme_dp.o 00:02:33.969 CC test/nvme/boot_partition/boot_partition.o 00:02:33.969 CC examples/ioat/verify/verify.o 00:02:33.969 CC test/nvme/reserve/reserve.o 00:02:33.969 CC examples/ioat/perf/perf.o 00:02:33.969 CC test/nvme/fdp/fdp.o 00:02:33.969 CC test/nvme/compliance/nvme_compliance.o 00:02:33.969 CC test/nvme/fused_ordering/fused_ordering.o 00:02:33.969 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:33.969 CC test/dma/test_dma/test_dma.o 00:02:33.969 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:33.969 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:33.969 CC test/nvme/simple_copy/simple_copy.o 00:02:33.969 CC test/blobfs/mkfs/mkfs.o 00:02:33.969 CC test/thread/poller_perf/poller_perf.o 00:02:33.969 CC test/app/bdev_svc/bdev_svc.o 00:02:33.969 CC examples/nvme/hotplug/hotplug.o 00:02:33.969 CC examples/accel/perf/accel_perf.o 00:02:33.969 CC test/nvme/cuse/cuse.o 00:02:33.969 CC examples/sock/hello_world/hello_sock.o 00:02:33.969 CC examples/nvme/reconnect/reconnect.o 00:02:33.969 CC examples/idxd/perf/perf.o 00:02:33.969 CC test/event/scheduler/scheduler.o 00:02:33.969 CC examples/nvme/arbitration/arbitration.o 00:02:33.969 CC examples/bdev/bdevperf/bdevperf.o 00:02:33.969 CC examples/nvme/hello_world/hello_world.o 00:02:33.969 CC test/accel/dif/dif.o 00:02:33.969 CC examples/thread/thread/thread_ex.o 00:02:33.969 CC examples/blob/cli/blobcli.o 00:02:33.969 CC examples/nvmf/nvmf/nvmf.o 00:02:33.969 CC examples/bdev/hello_world/hello_bdev.o 00:02:33.969 CC examples/nvme/abort/abort.o 00:02:33.969 CC app/fio/bdev/fio_plugin.o 00:02:33.969 CC examples/blob/hello_world/hello_blob.o 00:02:33.969 LINK rpc_client_test 00:02:33.969 LINK spdk_lspci 00:02:34.243 LINK spdk_nvme_discover 00:02:34.243 LINK iscsi_tgt 00:02:34.243 CC test/lvol/esnap/esnap.o 00:02:34.243 LINK nvmf_tgt 00:02:34.243 CC test/env/mem_callbacks/mem_callbacks.o 00:02:34.507 LINK vhost 00:02:34.507 LINK interrupt_tgt 00:02:34.507 LINK reactor 00:02:34.507 LINK spdk_trace_record 00:02:34.507 LINK 
spdk_tgt 00:02:34.507 LINK lsvmd 00:02:34.507 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:34.507 LINK jsoncat 00:02:34.507 LINK vtophys 00:02:34.507 LINK reactor_perf 00:02:34.507 LINK event_perf 00:02:34.507 LINK boot_partition 00:02:34.507 LINK led 00:02:34.507 LINK zipf 00:02:34.507 CXX test/cpp_headers/stdinc.o 00:02:34.507 CXX test/cpp_headers/string.o 00:02:34.507 CXX test/cpp_headers/thread.o 00:02:34.507 LINK stub 00:02:34.507 CXX test/cpp_headers/trace.o 00:02:34.507 LINK doorbell_aers 00:02:34.507 CXX test/cpp_headers/trace_parser.o 00:02:34.507 LINK histogram_perf 00:02:34.507 CXX test/cpp_headers/tree.o 00:02:34.507 CXX test/cpp_headers/ublk.o 00:02:34.507 CXX test/cpp_headers/util.o 00:02:34.507 CXX test/cpp_headers/uuid.o 00:02:34.507 CXX test/cpp_headers/version.o 00:02:34.507 LINK err_injection 00:02:34.507 CXX test/cpp_headers/vfio_user_pci.o 00:02:34.507 CXX test/cpp_headers/vfio_user_spec.o 00:02:34.507 LINK env_dpdk_post_init 00:02:34.507 CXX test/cpp_headers/vhost.o 00:02:34.507 CXX test/cpp_headers/vmd.o 00:02:34.507 CXX test/cpp_headers/xor.o 00:02:34.507 CXX test/cpp_headers/zipf.o 00:02:34.507 LINK connect_stress 00:02:34.507 LINK poller_perf 00:02:34.507 LINK spdk_dd 00:02:34.507 LINK reserve 00:02:34.507 LINK app_repeat 00:02:34.507 LINK reset 00:02:34.507 LINK pmr_persistence 00:02:34.507 LINK bdev_svc 00:02:34.766 LINK startup 00:02:34.766 LINK fused_ordering 00:02:34.766 LINK mkfs 00:02:34.766 LINK overhead 00:02:34.766 LINK cmb_copy 00:02:34.766 LINK verify 00:02:34.766 LINK hello_sock 00:02:34.766 LINK nvme_dp 00:02:34.766 LINK scheduler 00:02:34.766 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:34.766 LINK hotplug 00:02:34.767 LINK ioat_perf 00:02:34.767 LINK hello_bdev 00:02:34.767 LINK hello_world 00:02:34.767 LINK sgl 00:02:34.767 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:34.767 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:34.767 LINK aer 00:02:34.767 LINK simple_copy 00:02:34.767 LINK nvmf 00:02:34.767 LINK hello_blob 00:02:34.767 LINK fdp 00:02:34.767 LINK thread 00:02:34.767 LINK reconnect 00:02:34.767 LINK spdk_trace 00:02:34.767 LINK accel_perf 00:02:34.767 LINK bdevio 00:02:34.767 LINK test_dma 00:02:34.767 LINK nvme_compliance 00:02:34.767 LINK abort 00:02:34.767 LINK dif 00:02:34.767 LINK pci_ut 00:02:35.028 LINK idxd_perf 00:02:35.028 LINK arbitration 00:02:35.028 LINK nvme_manage 00:02:35.028 LINK spdk_nvme_identify 00:02:35.028 LINK spdk_nvme 00:02:35.028 LINK spdk_top 00:02:35.028 LINK spdk_bdev 00:02:35.028 LINK nvme_fuzz 00:02:35.028 LINK blobcli 00:02:35.028 LINK vhost_fuzz 00:02:35.290 LINK spdk_nvme_perf 00:02:35.290 LINK bdevperf 00:02:35.290 LINK mem_callbacks 00:02:35.290 LINK memory_ut 00:02:35.553 LINK cuse 00:02:36.125 LINK iscsi_fuzz 00:02:38.041 LINK esnap 00:02:38.615 00:02:38.615 real 0m49.042s 00:02:38.615 user 6m32.004s 00:02:38.615 sys 5m0.795s 00:02:38.615 02:22:12 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:38.615 02:22:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.615 ************************************ 00:02:38.615 END TEST make 00:02:38.615 ************************************ 00:02:38.615 02:22:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:38.615 02:22:12 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:38.615 02:22:12 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:38.615 02:22:12 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.615 02:22:12 -- pm/common@44 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:38.615 02:22:12 -- pm/common@45 -- $ pid=3984391 00:02:38.615 02:22:12 -- pm/common@52 -- $ sudo kill -TERM 3984391 00:02:38.615 02:22:12 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.615 02:22:12 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:38.615 02:22:12 -- pm/common@45 -- $ pid=3984392 00:02:38.615 02:22:12 -- pm/common@52 -- $ sudo kill -TERM 3984392 00:02:38.615 02:22:12 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.615 02:22:12 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:38.615 02:22:12 -- pm/common@45 -- $ pid=3984393 00:02:38.615 02:22:12 -- pm/common@52 -- $ sudo kill -TERM 3984393 00:02:38.615 02:22:12 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.615 02:22:12 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:38.615 02:22:12 -- pm/common@45 -- $ pid=3984396 00:02:38.615 02:22:12 -- pm/common@52 -- $ sudo kill -TERM 3984396 00:02:38.877 02:22:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:38.877 02:22:12 -- nvmf/common.sh@7 -- # uname -s 00:02:38.877 02:22:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:38.877 02:22:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:38.877 02:22:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:38.877 02:22:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:38.877 02:22:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:38.877 02:22:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:38.877 02:22:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:38.877 02:22:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:38.877 02:22:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:38.877 02:22:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:38.877 02:22:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:38.877 02:22:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:38.877 02:22:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:38.877 02:22:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:38.877 02:22:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:38.877 02:22:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:38.877 02:22:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:38.877 02:22:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:38.877 02:22:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:38.877 02:22:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:38.877 02:22:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.877 02:22:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.877 02:22:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.877 02:22:12 -- paths/export.sh@5 -- # export PATH 00:02:38.877 02:22:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.877 02:22:12 -- nvmf/common.sh@47 -- # : 0 00:02:38.877 02:22:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:38.877 02:22:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:38.878 02:22:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:38.878 02:22:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:38.878 02:22:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:38.878 02:22:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:38.878 02:22:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:38.878 02:22:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:38.878 02:22:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:38.878 02:22:12 -- spdk/autotest.sh@32 -- # uname -s 00:02:38.878 02:22:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:38.878 02:22:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:38.878 02:22:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:38.878 02:22:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:38.878 02:22:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:38.878 02:22:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:38.878 02:22:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:38.878 02:22:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:38.878 02:22:12 -- spdk/autotest.sh@48 -- # udevadm_pid=4047261 00:02:38.878 02:22:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:38.878 02:22:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:38.878 02:22:12 -- pm/common@17 -- # local monitor 00:02:38.878 02:22:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.878 02:22:12 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=4047263 00:02:38.878 02:22:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.878 02:22:12 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=4047266 00:02:38.878 02:22:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.878 02:22:12 -- pm/common@21 -- # date +%s 00:02:38.878 02:22:12 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=4047268 00:02:38.878 02:22:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.878 02:22:12 -- pm/common@21 -- # date +%s 00:02:38.878 02:22:12 -- pm/common@21 -- # date 
+%s 00:02:38.878 02:22:12 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=4047271 00:02:38.878 02:22:12 -- pm/common@26 -- # sleep 1 00:02:38.878 02:22:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714177332 00:02:38.878 02:22:12 -- pm/common@21 -- # date +%s 00:02:38.878 02:22:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714177332 00:02:38.878 02:22:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714177332 00:02:38.878 02:22:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714177332 00:02:38.878 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714177332_collect-cpu-temp.pm.log 00:02:38.878 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714177332_collect-vmstat.pm.log 00:02:38.878 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714177332_collect-bmc-pm.bmc.pm.log 00:02:38.878 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714177332_collect-cpu-load.pm.log 00:02:39.841 02:22:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:39.841 02:22:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:39.841 02:22:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:39.841 02:22:13 -- common/autotest_common.sh@10 -- # set +x 00:02:39.841 02:22:13 -- spdk/autotest.sh@59 -- # create_test_list 00:02:39.841 02:22:13 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:39.841 02:22:13 -- common/autotest_common.sh@10 -- # set +x 00:02:39.841 02:22:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:39.841 02:22:13 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:39.841 02:22:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:39.841 02:22:13 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:39.841 02:22:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:39.841 02:22:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:39.841 02:22:13 -- common/autotest_common.sh@1441 -- # uname 00:02:39.841 02:22:13 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:39.841 02:22:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:39.841 02:22:13 -- common/autotest_common.sh@1461 -- # uname 00:02:39.841 02:22:13 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:39.841 02:22:13 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:39.841 02:22:13 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:39.841 02:22:13 -- spdk/autotest.sh@72 -- # hash lcov 00:02:39.841 02:22:13 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == 
*\c\l\a\n\g* ]] 00:02:39.841 02:22:13 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:39.841 --rc lcov_branch_coverage=1 00:02:39.841 --rc lcov_function_coverage=1 00:02:39.841 --rc genhtml_branch_coverage=1 00:02:39.841 --rc genhtml_function_coverage=1 00:02:39.841 --rc genhtml_legend=1 00:02:39.841 --rc geninfo_all_blocks=1 00:02:39.841 ' 00:02:39.841 02:22:13 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:39.841 --rc lcov_branch_coverage=1 00:02:39.841 --rc lcov_function_coverage=1 00:02:39.841 --rc genhtml_branch_coverage=1 00:02:39.841 --rc genhtml_function_coverage=1 00:02:39.841 --rc genhtml_legend=1 00:02:39.841 --rc geninfo_all_blocks=1 00:02:39.841 ' 00:02:39.841 02:22:13 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:39.841 --rc lcov_branch_coverage=1 00:02:39.841 --rc lcov_function_coverage=1 00:02:39.841 --rc genhtml_branch_coverage=1 00:02:39.841 --rc genhtml_function_coverage=1 00:02:39.841 --rc genhtml_legend=1 00:02:39.841 --rc geninfo_all_blocks=1 00:02:39.841 --no-external' 00:02:39.841 02:22:13 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:39.841 --rc lcov_branch_coverage=1 00:02:39.841 --rc lcov_function_coverage=1 00:02:39.841 --rc genhtml_branch_coverage=1 00:02:39.841 --rc genhtml_function_coverage=1 00:02:39.841 --rc genhtml_legend=1 00:02:39.841 --rc geninfo_all_blocks=1 00:02:39.841 --no-external' 00:02:39.841 02:22:13 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:40.102 lcov: LCOV version 1.14 00:02:40.102 02:22:13 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:48.248 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:48.248 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:48.248 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:48.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:48.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:48.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:48.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:51.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:51.551 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:01.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:01.558 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:01.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:01.558 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:01.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:01.558 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:09.704 02:22:41 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:09.704 02:22:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:09.704 02:22:41 -- common/autotest_common.sh@10 -- # set +x 00:03:09.704 02:22:41 -- spdk/autotest.sh@91 -- # rm -f 00:03:09.704 02:22:41 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.620 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:11.620 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:11.620 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:11.620 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:11.621 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:11.621 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:11.621 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:11.621 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:11.621 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:11.621 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:11.621 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:11.621 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:11.621 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:11.621 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:11.621 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:11.621 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:11.621 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:11.621 02:22:45 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:11.621 02:22:45 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:11.621 02:22:45 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:11.621 02:22:45 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:11.621 02:22:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:11.621 02:22:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:11.621 02:22:45 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:11.621 02:22:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:11.621 02:22:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:11.621 02:22:45 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:11.621 02:22:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:11.621 02:22:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:11.621 02:22:45 -- spdk/autotest.sh@113 -- # 
block_in_use /dev/nvme0n1 00:03:11.621 02:22:45 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:11.621 02:22:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:11.882 No valid GPT data, bailing 00:03:11.882 02:22:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:11.882 02:22:45 -- scripts/common.sh@391 -- # pt= 00:03:11.882 02:22:45 -- scripts/common.sh@392 -- # return 1 00:03:11.882 02:22:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:11.882 1+0 records in 00:03:11.882 1+0 records out 00:03:11.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00220115 s, 476 MB/s 00:03:11.882 02:22:45 -- spdk/autotest.sh@118 -- # sync 00:03:11.882 02:22:45 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:11.882 02:22:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:11.882 02:22:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:20.122 02:22:53 -- spdk/autotest.sh@124 -- # uname -s 00:03:20.122 02:22:53 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:20.122 02:22:53 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:20.122 02:22:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.122 02:22:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.122 02:22:53 -- common/autotest_common.sh@10 -- # set +x 00:03:20.122 ************************************ 00:03:20.122 START TEST setup.sh 00:03:20.122 ************************************ 00:03:20.122 02:22:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:20.122 * Looking for test storage... 00:03:20.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:20.122 02:22:53 -- setup/test-setup.sh@10 -- # uname -s 00:03:20.122 02:22:53 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:20.122 02:22:53 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:20.122 02:22:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.122 02:22:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.122 02:22:53 -- common/autotest_common.sh@10 -- # set +x 00:03:20.122 ************************************ 00:03:20.122 START TEST acl 00:03:20.122 ************************************ 00:03:20.122 02:22:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:20.415 * Looking for test storage... 
00:03:20.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:20.415 02:22:53 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:20.415 02:22:53 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:20.415 02:22:53 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:20.415 02:22:53 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:20.415 02:22:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:20.415 02:22:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:20.415 02:22:53 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:20.415 02:22:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:20.415 02:22:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:20.415 02:22:53 -- setup/acl.sh@12 -- # devs=() 00:03:20.415 02:22:53 -- setup/acl.sh@12 -- # declare -a devs 00:03:20.415 02:22:53 -- setup/acl.sh@13 -- # drivers=() 00:03:20.415 02:22:53 -- setup/acl.sh@13 -- # declare -A drivers 00:03:20.415 02:22:53 -- setup/acl.sh@51 -- # setup reset 00:03:20.415 02:22:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.415 02:22:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.625 02:22:57 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:24.626 02:22:57 -- setup/acl.sh@16 -- # local dev driver 00:03:24.626 02:22:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.626 02:22:57 -- setup/acl.sh@15 -- # setup output status 00:03:24.626 02:22:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.626 02:22:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:27.177 Hugepages 00:03:27.177 node hugesize free / total 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # continue 00:03:27.177 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # continue 00:03:27.177 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # continue 00:03:27.177 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.177 00:03:27.177 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # continue 00:03:27.177 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.177 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.177 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.177 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
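Aside: the acl trace above is the output of scripts/setup.sh status being scanned row by row ("read -r _ dev _ _ _ driver _"), skipping the hugepage rows and keeping only controllers bound to the nvme driver. A minimal stand-alone sketch of that scan, reusing the setup.sh path from this run (the variable names here are illustrative, not the test's own):

  # Collect NVMe-bound controllers from `setup.sh status` output (sketch).
  SETUP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
  devs=()
  while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue    # skip the Hugepages / hugesize header rows
    [[ $driver == nvme ]] || continue    # ioatdma and other non-NVMe drivers are ignored
    devs+=("$dev")
  done < <("$SETUP" status)
  printf 'found %d nvme controller(s): %s\n' "${#devs[@]}" "${devs[*]}"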
00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.177 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.177 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.177 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.177 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.177 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.177 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.177 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.439 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:27.439 02:23:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:27.439 02:23:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:27.439 02:23:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:27.439 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.439 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.439 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.439 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.439 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.439 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.439 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.439 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.439 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.439 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.439 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.439 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.439 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.439 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:27.439 02:23:00 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.439 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.439 02:23:00 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.439 02:23:00 -- setup/acl.sh@20 -- # continue 00:03:27.439 02:23:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.439 02:23:00 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:27.439 02:23:00 -- setup/acl.sh@54 -- # run_test denied denied 00:03:27.439 02:23:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:27.439 02:23:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:27.439 02:23:00 -- common/autotest_common.sh@10 -- # set +x 00:03:27.701 ************************************ 00:03:27.701 START TEST denied 00:03:27.701 ************************************ 00:03:27.701 02:23:01 -- common/autotest_common.sh@1111 -- # denied 00:03:27.701 02:23:01 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:27.701 02:23:01 -- setup/acl.sh@38 -- # setup output config 00:03:27.701 02:23:01 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:27.701 02:23:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.701 02:23:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:31.007 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:31.007 02:23:04 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:31.007 02:23:04 -- setup/acl.sh@28 -- # local dev driver 00:03:31.007 02:23:04 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:31.007 02:23:04 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:31.007 02:23:04 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:31.007 02:23:04 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:31.007 02:23:04 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:31.007 02:23:04 -- setup/acl.sh@41 -- # setup reset 00:03:31.007 02:23:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.007 02:23:04 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.306 00:03:36.306 real 0m7.823s 00:03:36.306 user 0m2.588s 00:03:36.306 sys 0m4.540s 00:03:36.306 02:23:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:36.306 02:23:08 -- common/autotest_common.sh@10 -- # set +x 00:03:36.306 ************************************ 00:03:36.306 END TEST denied 00:03:36.306 ************************************ 00:03:36.306 02:23:08 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:36.306 02:23:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:36.306 02:23:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:36.306 02:23:08 -- common/autotest_common.sh@10 -- # set +x 00:03:36.306 ************************************ 00:03:36.306 START TEST allowed 00:03:36.306 ************************************ 00:03:36.306 02:23:09 -- common/autotest_common.sh@1111 -- # allowed 00:03:36.306 02:23:09 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:36.306 02:23:09 -- setup/acl.sh@45 -- # setup output config 00:03:36.306 02:23:09 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:36.306 02:23:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.306 02:23:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
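The denied/allowed stages above drive scripts/setup.sh through its PCI_BLOCKED and PCI_ALLOWED environment variables; the config output that follows is the allowed case. A minimal sketch of the same pattern against the controller used in this run, 0000:65:00.0 (setup.sh generally has to run with root privileges):

  SETUP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
  # denied: a blocked controller must be skipped and left on its kernel driver
  PCI_BLOCKED=' 0000:65:00.0' "$SETUP" config | grep 'Skipping denied controller at 0000:65:00.0'
  "$SETUP" reset
  # allowed: only the allowed controller is rebound (nvme -> vfio-pci in the output that follows)
  PCI_ALLOWED='0000:65:00.0' "$SETUP" config | grep -E '0000:65:00.0 .*: nvme -> .*'
  "$SETUP" reset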
00:03:41.596 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:41.596 02:23:14 -- setup/acl.sh@47 -- # verify 00:03:41.596 02:23:14 -- setup/acl.sh@28 -- # local dev driver 00:03:41.596 02:23:14 -- setup/acl.sh@48 -- # setup reset 00:03:41.596 02:23:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.596 02:23:14 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.900 00:03:44.900 real 0m8.992s 00:03:44.900 user 0m2.609s 00:03:44.900 sys 0m4.696s 00:03:44.900 02:23:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:44.900 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:03:44.900 ************************************ 00:03:44.900 END TEST allowed 00:03:44.900 ************************************ 00:03:44.900 00:03:44.900 real 0m24.450s 00:03:44.900 user 0m8.094s 00:03:44.900 sys 0m14.150s 00:03:44.900 02:23:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:44.900 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:03:44.900 ************************************ 00:03:44.900 END TEST acl 00:03:44.900 ************************************ 00:03:44.900 02:23:18 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:44.900 02:23:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:44.900 02:23:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:44.900 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:03:44.900 ************************************ 00:03:44.900 START TEST hugepages 00:03:44.900 ************************************ 00:03:44.900 02:23:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:44.900 * Looking for test storage... 
00:03:44.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:44.901 02:23:18 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:44.901 02:23:18 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:44.901 02:23:18 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:44.901 02:23:18 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:44.901 02:23:18 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:44.901 02:23:18 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:44.901 02:23:18 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:44.901 02:23:18 -- setup/common.sh@18 -- # local node= 00:03:44.901 02:23:18 -- setup/common.sh@19 -- # local var val 00:03:44.901 02:23:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.901 02:23:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.901 02:23:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.901 02:23:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.901 02:23:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.901 02:23:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 99018836 kB' 'MemAvailable: 103325664 kB' 'Buffers: 2696 kB' 'Cached: 18200996 kB' 'SwapCached: 0 kB' 'Active: 15142564 kB' 'Inactive: 3667940 kB' 'Active(anon): 14018620 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610028 kB' 'Mapped: 202884 kB' 'Shmem: 13411808 kB' 'KReclaimable: 555888 kB' 'Slab: 1437268 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 881380 kB' 'KernelStack: 27456 kB' 'PageTables: 9432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460904 kB' 'Committed_AS: 15530852 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235656 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.901 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.901 02:23:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 
00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 
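[Editor's note: the repeated setup/common.sh@31-@33 entries above and below are bash xtrace of get_meminfo scanning /proc/meminfo one key at a time until it reaches the requested field (here Hugepagesize, which returns 2048 a little further down). A minimal sketch of that read/compare/echo loop follows; the helper name and quoting are the editor's, only the pattern of the loop is taken from the trace.]

    # Sketch only: print the value of one /proc/meminfo field, as the traced helper does.
    # Example: get_meminfo_sketch Hugepagesize   -> prints 2048 on this host
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching key produces one "continue" line in the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }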
00:03:44.902 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.902 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.902 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # continue 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.903 02:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.903 02:23:18 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.903 02:23:18 -- setup/common.sh@33 -- # echo 2048 00:03:44.903 02:23:18 -- setup/common.sh@33 -- # return 0 00:03:44.903 02:23:18 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:44.903 02:23:18 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:44.903 02:23:18 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:44.903 02:23:18 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:44.903 02:23:18 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:44.903 02:23:18 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:44.903 02:23:18 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:44.903 02:23:18 -- setup/hugepages.sh@207 -- # get_nodes 00:03:44.903 02:23:18 -- setup/hugepages.sh@27 -- # local node 00:03:44.903 02:23:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.903 02:23:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:44.903 02:23:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.903 02:23:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:44.903 02:23:18 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.903 02:23:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.903 02:23:18 -- setup/hugepages.sh@208 -- # clear_hp 00:03:44.903 02:23:18 -- setup/hugepages.sh@37 -- # local node hp 00:03:44.903 02:23:18 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:44.903 02:23:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.903 02:23:18 -- setup/hugepages.sh@41 -- # echo 0 00:03:44.903 02:23:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.903 02:23:18 -- setup/hugepages.sh@41 -- # echo 0 00:03:44.903 02:23:18 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:44.903 02:23:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.903 02:23:18 -- setup/hugepages.sh@41 -- # echo 0 00:03:44.903 02:23:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.903 02:23:18 -- setup/hugepages.sh@41 -- # echo 0 00:03:44.903 02:23:18 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:44.903 02:23:18 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:44.903 02:23:18 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:44.903 02:23:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:44.903 02:23:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:44.903 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:03:45.164 ************************************ 00:03:45.164 START TEST default_setup 00:03:45.164 ************************************ 00:03:45.164 02:23:18 -- common/autotest_common.sh@1111 -- # default_setup 00:03:45.164 02:23:18 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:45.164 02:23:18 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.164 02:23:18 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:45.164 02:23:18 -- setup/hugepages.sh@51 -- # shift 00:03:45.164 02:23:18 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:45.164 02:23:18 -- setup/hugepages.sh@52 -- # local node_ids 00:03:45.164 02:23:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.164 02:23:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.164 02:23:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:45.164 02:23:18 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:45.164 02:23:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.164 02:23:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.164 02:23:18 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.164 02:23:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.164 02:23:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.164 02:23:18 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
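[Editor's note: the hugepages.sh trace above shows clear_hp zeroing every per-node hugepage pool and get_test_nr_hugepages turning the requested 2097152 (interpreted as kB, i.e. 2 GiB, as the numbers in the trace imply) into 1024 pages of the default 2048 kB size, all assigned to node 0. A compact sketch of that accounting; variable names follow the trace, the sysfs write requires root as in the CI job.]

    # Sketch only: reproduce the arithmetic traced above.
    default_hugepages=2048                           # kB, from Hugepagesize in /proc/meminfo
    size_kb=2097152                                  # requested pool size (2 GiB)
    nr_hugepages=$((size_kb / default_hugepages))    # -> 1024, matching the trace
    echo "nr_hugepages=$nr_hugepages"

    # clear_hp equivalent: reset every per-node pool before the test allocates its own pages
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"                               # requires root
    done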
00:03:45.164 02:23:18 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.164 02:23:18 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:45.164 02:23:18 -- setup/hugepages.sh@73 -- # return 0 00:03:45.164 02:23:18 -- setup/hugepages.sh@137 -- # setup output 00:03:45.164 02:23:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.164 02:23:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:48.470 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:48.470 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:48.470 02:23:22 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:48.470 02:23:22 -- setup/hugepages.sh@89 -- # local node 00:03:48.470 02:23:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.470 02:23:22 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.470 02:23:22 -- setup/hugepages.sh@92 -- # local surp 00:03:48.470 02:23:22 -- setup/hugepages.sh@93 -- # local resv 00:03:48.470 02:23:22 -- setup/hugepages.sh@94 -- # local anon 00:03:48.470 02:23:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.470 02:23:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.470 02:23:22 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.470 02:23:22 -- setup/common.sh@18 -- # local node= 00:03:48.470 02:23:22 -- setup/common.sh@19 -- # local var val 00:03:48.470 02:23:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.470 02:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.470 02:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.470 02:23:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.470 02:23:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.470 02:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.470 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.470 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101220152 kB' 'MemAvailable: 105526980 kB' 'Buffers: 2696 kB' 'Cached: 18201116 kB' 'SwapCached: 0 kB' 'Active: 15156328 kB' 'Inactive: 3667940 kB' 'Active(anon): 14032384 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623912 kB' 'Mapped: 203152 kB' 'Shmem: 13411928 kB' 'KReclaimable: 555888 kB' 'Slab: 1435200 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879312 kB' 'KernelStack: 
27472 kB' 'PageTables: 9944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15549184 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235624 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- 
# [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.471 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.471 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.472 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.472 02:23:22 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.472 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.472 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.472 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.472 02:23:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.472 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.472 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.472 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.472 02:23:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.472 02:23:22 -- setup/common.sh@33 -- # echo 0 00:03:48.472 02:23:22 -- setup/common.sh@33 -- # return 0 00:03:48.472 02:23:22 -- setup/hugepages.sh@97 -- # anon=0 00:03:48.737 02:23:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.737 02:23:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.737 02:23:22 -- setup/common.sh@18 -- # local node= 00:03:48.737 02:23:22 -- setup/common.sh@19 -- # local var val 00:03:48.737 02:23:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.737 02:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.737 02:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.737 02:23:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.737 02:23:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.737 02:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101220408 kB' 'MemAvailable: 105527236 kB' 'Buffers: 2696 kB' 'Cached: 18201120 kB' 'SwapCached: 0 kB' 'Active: 15155800 kB' 'Inactive: 3667940 kB' 'Active(anon): 14031856 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623420 kB' 'Mapped: 203140 kB' 'Shmem: 13411932 kB' 'KReclaimable: 555888 kB' 'Slab: 1435200 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879312 kB' 'KernelStack: 27456 kB' 'PageTables: 9888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15549196 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235592 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 
-- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.737 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.737 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.738 02:23:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.738 02:23:22 -- setup/common.sh@33 -- # echo 0 00:03:48.738 02:23:22 -- setup/common.sh@33 -- # return 0 00:03:48.738 02:23:22 -- setup/hugepages.sh@99 -- # surp=0 00:03:48.738 02:23:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.738 02:23:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.738 02:23:22 -- setup/common.sh@18 -- # local node= 00:03:48.738 02:23:22 -- setup/common.sh@19 -- # local var val 00:03:48.738 02:23:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.738 02:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.738 02:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.738 02:23:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.738 02:23:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.738 02:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.738 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101221000 kB' 'MemAvailable: 105527828 kB' 'Buffers: 2696 kB' 'Cached: 18201132 kB' 'SwapCached: 0 kB' 'Active: 15155996 kB' 'Inactive: 3667940 kB' 'Active(anon): 14032052 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623604 kB' 'Mapped: 203140 kB' 'Shmem: 13411944 kB' 'KReclaimable: 555888 kB' 'Slab: 1435192 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879304 kB' 'KernelStack: 27472 kB' 'PageTables: 9960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15549212 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235592 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- 
setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.739 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.739 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 
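[Editor's note: once the three get_meminfo scans return anon=0, surp=0 and (just below) resv=0, verify_nr_hugepages only has to confirm that the kernel's pool matches what default_setup asked for; that is the check behind the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 lines below. A sketch of that consistency check, with awk standing in for the script's own get_meminfo helper.]

    # Sketch only: the accounting check traced at hugepages.sh@107 and @109 below.
    nr_hugepages=1024                                # expected pool size set by default_setup
    surp=0 resv=0                                    # HugePages_Surp and HugePages_Rsvd from the scans above/below
    anon=0                                           # AnonHugePages (kB); reported but not part of the arithmetic
    total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved pages" >&2
    (( total == nr_hugepages ))               || echo "pool size mismatch: got $total, want $nr_hugepages" >&2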
00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.740 02:23:22 -- setup/common.sh@33 -- # echo 0 00:03:48.740 02:23:22 -- setup/common.sh@33 -- # return 0 00:03:48.740 02:23:22 -- setup/hugepages.sh@100 -- # resv=0 00:03:48.740 02:23:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.740 nr_hugepages=1024 00:03:48.740 02:23:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.740 resv_hugepages=0 00:03:48.740 02:23:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.740 surplus_hugepages=0 00:03:48.740 02:23:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.740 anon_hugepages=0 00:03:48.740 02:23:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.740 02:23:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.740 02:23:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.740 02:23:22 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:03:48.740 02:23:22 -- setup/common.sh@18 -- # local node= 00:03:48.740 02:23:22 -- setup/common.sh@19 -- # local var val 00:03:48.740 02:23:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.740 02:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.740 02:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.740 02:23:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.740 02:23:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.740 02:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101221580 kB' 'MemAvailable: 105528408 kB' 'Buffers: 2696 kB' 'Cached: 18201144 kB' 'SwapCached: 0 kB' 'Active: 15155832 kB' 'Inactive: 3667940 kB' 'Active(anon): 14031888 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623428 kB' 'Mapped: 203140 kB' 'Shmem: 13411956 kB' 'KReclaimable: 555888 kB' 'Slab: 1435192 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879304 kB' 'KernelStack: 27456 kB' 'PageTables: 9904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15549224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235592 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.740 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.740 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.740 
02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 
02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.741 02:23:22 -- setup/common.sh@33 -- # echo 1024 00:03:48.741 02:23:22 -- setup/common.sh@33 -- # return 0 00:03:48.741 02:23:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.741 02:23:22 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.741 02:23:22 -- setup/hugepages.sh@27 -- # local node 00:03:48.741 02:23:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.741 02:23:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.741 02:23:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.741 02:23:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:48.741 02:23:22 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.741 02:23:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.741 02:23:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.741 02:23:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.741 02:23:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.741 02:23:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.741 02:23:22 -- setup/common.sh@18 -- # local node=0 00:03:48.741 02:23:22 -- setup/common.sh@19 -- # local var val 00:03:48.741 02:23:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.741 02:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.741 02:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.741 02:23:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.741 02:23:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.741 02:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.741 02:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54302156 
kB' 'MemUsed: 11356852 kB' 'SwapCached: 0 kB' 'Active: 6933260 kB' 'Inactive: 283868 kB' 'Active(anon): 6237332 kB' 'Inactive(anon): 0 kB' 'Active(file): 695928 kB' 'Inactive(file): 283868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7135652 kB' 'Mapped: 68412 kB' 'AnonPages: 84852 kB' 'Shmem: 6155856 kB' 'KernelStack: 13352 kB' 'PageTables: 3448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193132 kB' 'Slab: 679156 kB' 'SReclaimable: 193132 kB' 'SUnreclaim: 486024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.741 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.741 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 
-- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.742 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.742 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.743 02:23:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.743 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.743 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.743 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.743 02:23:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.743 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.743 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.743 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.743 02:23:22 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.743 02:23:22 -- setup/common.sh@32 -- # continue 00:03:48.743 02:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.743 02:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.743 02:23:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.743 02:23:22 -- setup/common.sh@33 -- # echo 0 00:03:48.743 02:23:22 -- setup/common.sh@33 -- # return 0 00:03:48.743 02:23:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.743 02:23:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.743 02:23:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.743 02:23:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.743 02:23:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:48.743 node0=1024 expecting 1024 00:03:48.743 02:23:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:48.743 00:03:48.743 real 0m3.619s 00:03:48.743 user 0m1.387s 00:03:48.743 sys 0m2.209s 00:03:48.743 02:23:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:48.743 02:23:22 -- common/autotest_common.sh@10 -- # set +x 00:03:48.743 ************************************ 00:03:48.743 END TEST default_setup 00:03:48.743 ************************************ 00:03:48.743 02:23:22 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:48.743 02:23:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.743 02:23:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.743 02:23:22 -- common/autotest_common.sh@10 -- # set +x 00:03:49.004 ************************************ 00:03:49.004 START TEST per_node_1G_alloc 00:03:49.004 ************************************ 00:03:49.004 02:23:22 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:03:49.004 02:23:22 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:49.004 02:23:22 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:49.004 02:23:22 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:49.004 02:23:22 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:49.004 02:23:22 -- setup/hugepages.sh@51 -- # shift 00:03:49.004 02:23:22 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:49.004 02:23:22 -- setup/hugepages.sh@52 -- # local node_ids 00:03:49.004 02:23:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:49.004 02:23:22 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:49.004 02:23:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:49.004 02:23:22 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:49.004 02:23:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:49.004 02:23:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:49.004 02:23:22 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:49.004 02:23:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:49.004 02:23:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:49.004 02:23:22 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:49.004 02:23:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:49.004 02:23:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:49.004 02:23:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:49.004 02:23:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:49.004 02:23:22 -- setup/hugepages.sh@73 -- # return 0 00:03:49.004 02:23:22 -- setup/hugepages.sh@146 -- # 
NRHUGE=512 00:03:49.004 02:23:22 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:49.004 02:23:22 -- setup/hugepages.sh@146 -- # setup output 00:03:49.004 02:23:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.004 02:23:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.315 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:52.315 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:52.315 02:23:25 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:52.315 02:23:25 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:52.315 02:23:25 -- setup/hugepages.sh@89 -- # local node 00:03:52.315 02:23:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.315 02:23:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.315 02:23:25 -- setup/hugepages.sh@92 -- # local surp 00:03:52.315 02:23:25 -- setup/hugepages.sh@93 -- # local resv 00:03:52.315 02:23:25 -- setup/hugepages.sh@94 -- # local anon 00:03:52.315 02:23:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.315 02:23:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.315 02:23:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.315 02:23:25 -- setup/common.sh@18 -- # local node= 00:03:52.315 02:23:25 -- setup/common.sh@19 -- # local var val 00:03:52.315 02:23:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.315 02:23:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.315 02:23:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.315 02:23:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.315 02:23:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.315 02:23:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101197576 kB' 'MemAvailable: 105504404 kB' 'Buffers: 2696 kB' 'Cached: 18201248 kB' 'SwapCached: 0 kB' 'Active: 15155244 kB' 'Inactive: 3667940 kB' 'Active(anon): 14031300 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
622048 kB' 'Mapped: 202240 kB' 'Shmem: 13412060 kB' 'KReclaimable: 555888 kB' 'Slab: 1435480 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879592 kB' 'KernelStack: 27440 kB' 'PageTables: 9764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15533712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235784 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 
00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.315 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.315 02:23:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.316 02:23:25 -- setup/common.sh@33 -- # echo 0 00:03:52.316 02:23:25 -- setup/common.sh@33 -- # return 0 00:03:52.316 02:23:25 -- setup/hugepages.sh@97 -- # anon=0 00:03:52.316 02:23:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.316 02:23:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.316 02:23:25 -- setup/common.sh@18 -- # local node= 00:03:52.316 02:23:25 -- setup/common.sh@19 -- # local var val 00:03:52.316 02:23:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.316 02:23:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.316 02:23:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.316 02:23:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.316 02:23:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.316 02:23:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101198268 kB' 'MemAvailable: 105505096 kB' 'Buffers: 2696 kB' 'Cached: 18201248 kB' 'SwapCached: 0 kB' 'Active: 15153880 kB' 'Inactive: 3667940 kB' 'Active(anon): 14029936 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621116 kB' 'Mapped: 202128 kB' 'Shmem: 13412060 kB' 'KReclaimable: 555888 kB' 'Slab: 1435448 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879560 kB' 'KernelStack: 27408 kB' 'PageTables: 9612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15533724 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235768 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 
02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.316 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.316 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 
00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 
-- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.317 02:23:25 -- setup/common.sh@33 -- # echo 0 00:03:52.317 02:23:25 -- setup/common.sh@33 -- # return 0 00:03:52.317 02:23:25 -- setup/hugepages.sh@99 -- # surp=0 00:03:52.317 02:23:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.317 02:23:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.317 02:23:25 -- setup/common.sh@18 -- # local node= 00:03:52.317 02:23:25 -- setup/common.sh@19 -- # local var val 00:03:52.317 02:23:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.317 02:23:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.317 02:23:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.317 02:23:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.317 02:23:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.317 02:23:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.317 02:23:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101198268 kB' 'MemAvailable: 105505096 kB' 'Buffers: 2696 kB' 'Cached: 18201260 kB' 'SwapCached: 0 kB' 'Active: 15153868 kB' 'Inactive: 3667940 kB' 'Active(anon): 14029924 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621116 kB' 'Mapped: 202128 kB' 'Shmem: 13412072 kB' 'KReclaimable: 555888 kB' 'Slab: 1435448 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879560 kB' 'KernelStack: 27408 kB' 'PageTables: 9612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15533744 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235768 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.317 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.317 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 
02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 
00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.318 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.318 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.319 02:23:25 -- setup/common.sh@33 -- # echo 0 00:03:52.319 02:23:25 -- setup/common.sh@33 -- # return 0 00:03:52.319 02:23:25 -- setup/hugepages.sh@100 -- # resv=0 00:03:52.319 02:23:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.319 nr_hugepages=1024 00:03:52.319 02:23:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.319 resv_hugepages=0 00:03:52.319 02:23:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.319 surplus_hugepages=0 00:03:52.319 02:23:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.319 anon_hugepages=0 00:03:52.319 02:23:25 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.319 02:23:25 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
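[editorial note] The trace above is the per-key scan that setup/common.sh's get_meminfo helper performs over /proc/meminfo (or a node's own meminfo file) to pull out HugePages_Surp, HugePages_Rsvd and HugePages_Total for the hugepage accounting check that follows. Below is a condensed, hedged sketch reconstructed only from the xtrace lines shown here; the real SPDK helper differs in detail (it may iterate differently than the while-read loop used here), and the variable names surp/resv/total/nr_hugepages in the second half mirror the hugepages.sh trace but the snippet itself is illustrative, not the project's source.

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern used when stripping per-node prefixes

    # Sketch of the meminfo scan traced above (not the verbatim SPDK source).
    get_meminfo() {
        local get=$1 node=${2:-} var val
        local mem_f=/proc/meminfo mem
        # A per-node query (e.g. get_meminfo HugePages_Surp 0) reads that node's meminfo instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        # Scan "Key: value [unit]" pairs until the requested key matches, then print its value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # The accounting the test applies to the values read above:
    nr_hugepages=1024                      # what this test requested
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"

The same helper is then invoked with an explicit node argument (0, then 1) in the trace that follows, so the per-node HugePages totals can be checked against the 512 + 512 split the test expects ("node0=512 expecting 512", "node1=512 expecting 512").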
00:03:52.319 02:23:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.319 02:23:25 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.319 02:23:25 -- setup/common.sh@18 -- # local node= 00:03:52.319 02:23:25 -- setup/common.sh@19 -- # local var val 00:03:52.319 02:23:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.319 02:23:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.319 02:23:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.319 02:23:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.319 02:23:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.319 02:23:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101197408 kB' 'MemAvailable: 105504236 kB' 'Buffers: 2696 kB' 'Cached: 18201260 kB' 'SwapCached: 0 kB' 'Active: 15154072 kB' 'Inactive: 3667940 kB' 'Active(anon): 14030128 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621372 kB' 'Mapped: 202128 kB' 'Shmem: 13412072 kB' 'KReclaimable: 555888 kB' 'Slab: 1435448 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879560 kB' 'KernelStack: 27440 kB' 'PageTables: 9704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15533752 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235768 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.319 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.319 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 
-- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 
00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- 
setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.320 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.320 02:23:25 -- setup/common.sh@33 -- # echo 1024 00:03:52.320 02:23:25 -- setup/common.sh@33 -- # return 0 00:03:52.320 02:23:25 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.320 02:23:25 -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.320 02:23:25 -- setup/hugepages.sh@27 -- # local node 00:03:52.320 02:23:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.320 02:23:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.320 02:23:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.320 02:23:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.320 02:23:25 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.320 02:23:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.320 02:23:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.320 02:23:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.320 02:23:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.320 02:23:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.320 02:23:25 -- setup/common.sh@18 -- # local node=0 00:03:52.320 02:23:25 -- setup/common.sh@19 -- # local var val 00:03:52.320 02:23:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.320 02:23:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.320 02:23:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.320 02:23:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.320 02:23:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.320 02:23:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.320 02:23:25 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:52.321 02:23:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 55351160 kB' 'MemUsed: 10307848 kB' 'SwapCached: 0 kB' 'Active: 6931396 kB' 'Inactive: 283868 kB' 'Active(anon): 6235468 kB' 'Inactive(anon): 0 kB' 'Active(file): 695928 kB' 'Inactive(file): 283868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7135760 kB' 'Mapped: 67392 kB' 'AnonPages: 82724 kB' 'Shmem: 6155964 kB' 'KernelStack: 13240 kB' 'PageTables: 2924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193132 kB' 'Slab: 679196 kB' 'SReclaimable: 193132 kB' 'SUnreclaim: 486064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # 
continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 
02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.321 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.321 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.321 02:23:25 -- setup/common.sh@33 -- # echo 0 00:03:52.321 02:23:25 -- setup/common.sh@33 -- # return 0 00:03:52.321 02:23:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.321 02:23:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.321 02:23:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.321 02:23:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.322 02:23:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.322 02:23:25 -- setup/common.sh@18 -- # local node=1 00:03:52.322 02:23:25 -- setup/common.sh@19 -- # local var val 00:03:52.322 02:23:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.322 02:23:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.322 02:23:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.322 02:23:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.322 02:23:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.322 02:23:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679896 kB' 'MemFree: 45845444 kB' 'MemUsed: 14834452 kB' 'SwapCached: 0 kB' 'Active: 8222444 kB' 'Inactive: 3384072 kB' 'Active(anon): 7794428 kB' 'Inactive(anon): 0 kB' 'Active(file): 428016 kB' 'Inactive(file): 3384072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11068224 kB' 'Mapped: 134736 kB' 'AnonPages: 538328 kB' 'Shmem: 7256136 kB' 'KernelStack: 14136 kB' 'PageTables: 6584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 362756 kB' 'Slab: 756252 kB' 'SReclaimable: 362756 kB' 'SUnreclaim: 393496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 
00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.322 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.322 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.323 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.323 02:23:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.323 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.323 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.323 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.323 02:23:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.323 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.323 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.323 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.323 02:23:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.323 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.323 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.323 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.323 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.323 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.323 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.323 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.323 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.323 02:23:25 -- setup/common.sh@32 -- # continue 00:03:52.323 02:23:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.323 02:23:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.323 02:23:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.323 02:23:25 -- setup/common.sh@33 -- # echo 0 00:03:52.323 02:23:25 -- setup/common.sh@33 -- # return 0 00:03:52.323 02:23:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.323 02:23:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.323 02:23:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.323 02:23:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.323 02:23:25 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.323 node0=512 expecting 512 00:03:52.584 02:23:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.584 02:23:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.584 02:23:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.584 02:23:25 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:52.584 node1=512 expecting 512 00:03:52.584 02:23:25 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:52.584 00:03:52.584 real 0m3.533s 00:03:52.584 user 0m1.399s 00:03:52.585 sys 0m2.187s 00:03:52.585 02:23:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:52.585 02:23:25 -- common/autotest_common.sh@10 -- # set +x 00:03:52.585 ************************************ 00:03:52.585 END TEST per_node_1G_alloc 00:03:52.585 ************************************ 00:03:52.585 02:23:25 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:52.585 02:23:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.585 02:23:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.585 02:23:25 -- common/autotest_common.sh@10 -- # set +x 00:03:52.585 ************************************ 00:03:52.585 START TEST even_2G_alloc 00:03:52.585 ************************************ 00:03:52.585 02:23:26 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:52.585 02:23:26 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:52.585 02:23:26 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.585 02:23:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.585 02:23:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.585 02:23:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.585 02:23:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.585 02:23:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.585 02:23:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.585 02:23:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.585 02:23:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.585 02:23:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.585 02:23:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.585 02:23:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.585 02:23:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.585 02:23:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.585 02:23:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.585 02:23:26 -- setup/hugepages.sh@83 -- # : 512 00:03:52.585 02:23:26 -- setup/hugepages.sh@84 -- # : 1 00:03:52.585 02:23:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.585 02:23:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.585 02:23:26 -- setup/hugepages.sh@83 -- # : 0 00:03:52.585 02:23:26 -- setup/hugepages.sh@84 -- # : 0 00:03:52.585 02:23:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.585 02:23:26 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:52.585 02:23:26 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:52.585 02:23:26 -- setup/hugepages.sh@153 -- # setup output 00:03:52.585 02:23:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.585 02:23:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.893 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:55.893 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:03:55.893 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.893 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:56.160 02:23:29 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:56.161 02:23:29 -- setup/hugepages.sh@89 -- # local node 00:03:56.161 02:23:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.161 02:23:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.161 02:23:29 -- setup/hugepages.sh@92 -- # local surp 00:03:56.161 02:23:29 -- setup/hugepages.sh@93 -- # local resv 00:03:56.161 02:23:29 -- setup/hugepages.sh@94 -- # local anon 00:03:56.161 02:23:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.161 02:23:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.161 02:23:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.161 02:23:29 -- setup/common.sh@18 -- # local node= 00:03:56.161 02:23:29 -- setup/common.sh@19 -- # local var val 00:03:56.161 02:23:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.161 02:23:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.161 02:23:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.161 02:23:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.161 02:23:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.161 02:23:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101207596 kB' 'MemAvailable: 105514424 kB' 'Buffers: 2696 kB' 'Cached: 18201392 kB' 'SwapCached: 0 kB' 'Active: 15156160 kB' 'Inactive: 3667940 kB' 'Active(anon): 14032216 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622816 kB' 'Mapped: 202788 kB' 'Shmem: 13412204 kB' 'KReclaimable: 555888 kB' 'Slab: 1435652 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879764 kB' 'KernelStack: 27424 kB' 'PageTables: 9708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15536028 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235752 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 
02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.161 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.161 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.162 02:23:29 -- 
setup/common.sh@33 -- # echo 0 00:03:56.162 02:23:29 -- setup/common.sh@33 -- # return 0 00:03:56.162 02:23:29 -- setup/hugepages.sh@97 -- # anon=0 00:03:56.162 02:23:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.162 02:23:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.162 02:23:29 -- setup/common.sh@18 -- # local node= 00:03:56.162 02:23:29 -- setup/common.sh@19 -- # local var val 00:03:56.162 02:23:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.162 02:23:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.162 02:23:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.162 02:23:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.162 02:23:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.162 02:23:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101208960 kB' 'MemAvailable: 105515788 kB' 'Buffers: 2696 kB' 'Cached: 18201392 kB' 'SwapCached: 0 kB' 'Active: 15159532 kB' 'Inactive: 3667940 kB' 'Active(anon): 14035588 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626220 kB' 'Mapped: 202756 kB' 'Shmem: 13412204 kB' 'KReclaimable: 555888 kB' 'Slab: 1435644 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879756 kB' 'KernelStack: 27440 kB' 'PageTables: 9740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15539736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235736 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
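(Reader aid, not part of the captured log.) The repeated @31/@32/@33 entries above are the xtrace of a get_meminfo-style field lookup: every /proc/meminfo line is split on ': ', non-matching fields hit "continue", and the requested field's value is echoed before "return 0". A minimal stand-alone sketch of that idea follows; the name get_meminfo_sketch is mine, and the real setup/common.sh helper additionally handles per-node meminfo files by stripping their "Node <n> " prefix.

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup, assuming the same idea the trace shows:
# split each /proc/meminfo line on ': ', skip until the requested field, echo it.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every field except the requested one
        echo "$val"                        # numeric value; any "kB" unit lands in $_
        return 0
    done < /proc/meminfo
    return 1
}

# Example: the surplus hugepage count this part of the trace is extracting.
surp=$(get_meminfo_sketch HugePages_Surp)
echo "HugePages_Surp=$surp"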
00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 
00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.162 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.162 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.163 02:23:29 -- setup/common.sh@33 -- # echo 0 00:03:56.163 02:23:29 -- setup/common.sh@33 -- # return 0 00:03:56.163 02:23:29 -- setup/hugepages.sh@99 -- # surp=0 00:03:56.163 02:23:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.163 02:23:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.163 02:23:29 -- setup/common.sh@18 -- # local node= 00:03:56.163 02:23:29 -- setup/common.sh@19 -- # local var val 00:03:56.163 02:23:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.163 02:23:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.163 02:23:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.163 02:23:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.163 02:23:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.163 02:23:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 
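(Reader aid, not part of the captured log.) Taken together, the get_meminfo calls traced on either side of this point (AnonHugePages, HugePages_Surp, HugePages_Rsvd, and HugePages_Total further down) feed one consistency check: the total the kernel reports has to equal the requested page count plus surplus plus reserved, which is the (( 1024 == nr_hugepages + surp + resv )) arithmetic visible later in the log. A condensed, self-contained sketch of that check; awk stands in for the traced per-line read loop, and the 1024 comes from this test's configuration.

#!/usr/bin/env bash
# Condensed sketch of the consistency check this trace is building up to.
nr_hugepages=1024                                             # count requested by the test

anon=$(awk  '/^AnonHugePages:/   {print $2}' /proc/meminfo)   # kB of anon THP in use
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # pages beyond the static pool
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # reserved, not yet faulted in
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

# Mirrors the traced arithmetic: the pool the kernel reports must account for
# the requested pages plus any surplus and reserved ones.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: total=$total surp=$surp resv=$resv anon=$anon"
else
    echo "unexpected hugepage accounting: total=$total" >&2
    exit 1
fi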
00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101207648 kB' 'MemAvailable: 105514476 kB' 'Buffers: 2696 kB' 'Cached: 18201404 kB' 'SwapCached: 0 kB' 'Active: 15154852 kB' 'Inactive: 3667940 kB' 'Active(anon): 14030908 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621992 kB' 'Mapped: 202584 kB' 'Shmem: 13412216 kB' 'KReclaimable: 555888 kB' 'Slab: 1435636 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879748 kB' 'KernelStack: 27408 kB' 'PageTables: 9636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15534828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235736 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.163 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.163 02:23:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 
02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.164 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.164 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.164 02:23:29 -- setup/common.sh@33 -- # echo 0 00:03:56.164 02:23:29 -- setup/common.sh@33 -- # return 0 00:03:56.164 02:23:29 -- setup/hugepages.sh@100 -- # resv=0 00:03:56.164 02:23:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.164 nr_hugepages=1024 00:03:56.164 02:23:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.164 resv_hugepages=0 00:03:56.164 02:23:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.164 surplus_hugepages=0 00:03:56.164 02:23:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.164 anon_hugepages=0 00:03:56.164 02:23:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.164 02:23:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.164 02:23:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.164 02:23:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.165 02:23:29 -- setup/common.sh@18 -- # local node= 00:03:56.165 02:23:29 -- setup/common.sh@19 -- # local var val 00:03:56.165 02:23:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.165 02:23:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.165 02:23:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.165 02:23:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.165 02:23:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.165 02:23:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101208776 kB' 'MemAvailable: 105515604 kB' 'Buffers: 2696 kB' 'Cached: 18201420 kB' 'SwapCached: 0 kB' 'Active: 15155076 kB' 'Inactive: 3667940 kB' 'Active(anon): 14031132 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622248 kB' 'Mapped: 
202168 kB' 'Shmem: 13412232 kB' 'KReclaimable: 555888 kB' 'Slab: 1435636 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879748 kB' 'KernelStack: 27424 kB' 'PageTables: 9680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15537748 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235720 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.165 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.165 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.166 02:23:29 -- setup/common.sh@33 -- # echo 1024 00:03:56.166 02:23:29 -- setup/common.sh@33 -- # return 0 00:03:56.166 02:23:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.166 02:23:29 -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.166 02:23:29 -- setup/hugepages.sh@27 -- # local node 00:03:56.166 02:23:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.166 02:23:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.166 02:23:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.166 02:23:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.166 02:23:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.166 02:23:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.166 02:23:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.166 02:23:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.166 02:23:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.166 02:23:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.166 02:23:29 -- setup/common.sh@18 -- # local node=0 00:03:56.166 02:23:29 -- setup/common.sh@19 -- # local var val 00:03:56.166 02:23:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.166 02:23:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.166 02:23:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.166 02:23:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.166 02:23:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.166 02:23:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 55342788 kB' 'MemUsed: 10316220 kB' 'SwapCached: 0 kB' 'Active: 6930512 kB' 'Inactive: 283868 kB' 'Active(anon): 6234584 kB' 'Inactive(anon): 0 kB' 'Active(file): 695928 kB' 'Inactive(file): 283868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7135860 kB' 'Mapped: 67388 kB' 'AnonPages: 81716 kB' 'Shmem: 6156064 kB' 'KernelStack: 13256 kB' 'PageTables: 3028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193132 kB' 'Slab: 679468 kB' 'SReclaimable: 193132 kB' 'SUnreclaim: 486336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 
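The field-by-field scan traced here is the meminfo lookup pattern from setup/common.sh: pick /proc/meminfo or the per-node snapshot, strip the "Node <n> " prefix, split each line on ': ', and echo the value once the requested key matches. A condensed, self-contained sketch of that pattern follows; the function name and file paths mirror the trace, but this standalone version (and the example calls at the bottom) is illustrative rather than the verbatim test helper.

#!/usr/bin/env bash
# Condensed sketch of the lookup the surrounding xtrace performs.
# get_meminfo KEY [NODE] echoes KEY's numeric value from /proc/meminfo,
# or from the per-node snapshot when NODE is given.
shopt -s extglob

get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _

        # Prefer the per-node snapshot when a node is requested and present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node snapshots prefix every line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan field by field until the requested key matches, then echo its value.
        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<< "$line"
                [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        return 1
}

get_meminfo HugePages_Total        # 1024 in the run above
get_meminfo HugePages_Surp 0       # surplus huge pages on NUMA node 0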
02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.166 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.166 02:23:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.166 02:23:29 -- 
setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@33 -- # echo 0 00:03:56.167 02:23:29 -- setup/common.sh@33 -- # return 0 00:03:56.167 02:23:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.167 02:23:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.167 02:23:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.167 02:23:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:56.167 02:23:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.167 02:23:29 -- setup/common.sh@18 -- # local node=1 00:03:56.167 02:23:29 -- setup/common.sh@19 -- # local var val 00:03:56.167 02:23:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.167 02:23:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.167 02:23:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:56.167 02:23:29 -- setup/common.sh@24 -- 
# mem_f=/sys/devices/system/node/node1/meminfo 00:03:56.167 02:23:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.167 02:23:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679896 kB' 'MemFree: 45866504 kB' 'MemUsed: 14813392 kB' 'SwapCached: 0 kB' 'Active: 8224488 kB' 'Inactive: 3384072 kB' 'Active(anon): 7796472 kB' 'Inactive(anon): 0 kB' 'Active(file): 428016 kB' 'Inactive(file): 3384072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11068272 kB' 'Mapped: 134808 kB' 'AnonPages: 540456 kB' 'Shmem: 7256184 kB' 'KernelStack: 14184 kB' 'PageTables: 6308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 362756 kB' 'Slab: 756168 kB' 'SReclaimable: 362756 kB' 'SUnreclaim: 393412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- 
setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.167 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.167 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # continue 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.168 02:23:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.168 02:23:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.168 02:23:29 -- setup/common.sh@33 -- # echo 0 00:03:56.168 02:23:29 -- setup/common.sh@33 -- # return 0 00:03:56.168 02:23:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.168 02:23:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.168 02:23:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.168 02:23:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.168 02:23:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:56.168 node0=512 expecting 512 00:03:56.168 02:23:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.168 02:23:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.168 02:23:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.168 02:23:29 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:56.168 node1=512 expecting 512 00:03:56.168 02:23:29 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:56.168 00:03:56.168 real 0m3.630s 00:03:56.168 user 0m1.432s 00:03:56.168 sys 0m2.256s 00:03:56.168 02:23:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:56.168 02:23:29 -- common/autotest_common.sh@10 -- # set +x 00:03:56.168 ************************************ 00:03:56.168 END TEST even_2G_alloc 00:03:56.168 ************************************ 00:03:56.430 02:23:29 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:56.430 02:23:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.430 02:23:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.430 02:23:29 -- common/autotest_common.sh@10 -- # set +x 00:03:56.430 ************************************ 00:03:56.430 START TEST odd_alloc 00:03:56.430 ************************************ 00:03:56.430 02:23:29 -- common/autotest_common.sh@1111 -- # odd_alloc 00:03:56.430 02:23:29 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:56.430 02:23:29 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:56.430 02:23:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.430 02:23:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.430 02:23:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:56.430 02:23:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.430 02:23:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.430 02:23:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.430 02:23:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:56.430 02:23:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.430 02:23:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.430 02:23:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.430 02:23:29 
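The check that closes even_2G_alloc above compares the pages actually resident on each node (nodes_test) against what the test configured per node (nodes_sys, 512 each), by indexing two arrays with the counts themselves so the comparison is order-independent. A minimal sketch of that comparison follows; the array names and the echoed "nodeN=... expecting ..." lines follow the trace, while the starting values are hard-coded for this run and the final comparison line is a reconstruction, not the verbatim script.

#!/usr/bin/env bash
# Sketch of the per-node verification that ends even_2G_alloc above.
declare -a nodes_test=(512 512)   # pages actually found per node
declare -a nodes_sys=(512 512)    # pages the test asked for per node
declare -a sorted_t=() sorted_s=()

for node in "${!nodes_test[@]}"; do
        # Index by the count itself, so each array ends up holding the set of
        # distinct per-node values seen, regardless of node order.
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done

# The allocation is even when both sets of distinct counts agree (512 == 512 here).
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo OK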
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.430 02:23:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:56.430 02:23:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.430 02:23:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:56.430 02:23:29 -- setup/hugepages.sh@83 -- # : 513 00:03:56.430 02:23:29 -- setup/hugepages.sh@84 -- # : 1 00:03:56.430 02:23:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.430 02:23:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:56.430 02:23:29 -- setup/hugepages.sh@83 -- # : 0 00:03:56.430 02:23:29 -- setup/hugepages.sh@84 -- # : 0 00:03:56.430 02:23:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.430 02:23:29 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:56.430 02:23:29 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:56.430 02:23:29 -- setup/hugepages.sh@160 -- # setup output 00:03:56.430 02:23:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.430 02:23:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.746 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:59.746 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.746 02:23:33 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:59.746 02:23:33 -- setup/hugepages.sh@89 -- # local node 00:03:59.746 02:23:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.746 02:23:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.746 02:23:33 -- setup/hugepages.sh@92 -- # local surp 00:03:59.746 02:23:33 -- setup/hugepages.sh@93 -- # local resv 00:03:59.746 02:23:33 -- setup/hugepages.sh@94 -- # local anon 00:03:59.746 02:23:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.746 02:23:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.746 02:23:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.746 02:23:33 -- setup/common.sh@18 -- # local node= 00:03:59.746 02:23:33 -- setup/common.sh@19 -- # local var val 00:03:59.746 02:23:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.746 02:23:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.746 02:23:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.746 02:23:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.746 02:23:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.746 
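The per-node split traced above is the interesting part of odd_alloc's setup: HUGEMEM=2049 yields an odd total of 1025 huge pages, and get_test_nr_hugepages_per_node spreads them over the two nodes so the counts differ by at most one (node1=512, node0=513). The sketch below reproduces that split; variable names follow setup/hugepages.sh as traced, the starting values are taken from this run, and the loop body is reconstructed from the xtrace rather than copied from the script.

#!/usr/bin/env bash
# Sketch of the odd_alloc per-node distribution: 1025 pages over 2 nodes.
_nr_hugepages=1025
_no_nodes=2
declare -a nodes_test=()

while (( _no_nodes > 0 )); do
        # The highest-numbered remaining node gets an even share of what is left ...
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        # ... and the remainder carries forward, so node 0 ends up with the extra page.
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
        : $(( --_no_nodes ))
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512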
02:23:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101217848 kB' 'MemAvailable: 105524676 kB' 'Buffers: 2696 kB' 'Cached: 18201532 kB' 'SwapCached: 0 kB' 'Active: 15156700 kB' 'Inactive: 3667940 kB' 'Active(anon): 14032756 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623564 kB' 'Mapped: 202268 kB' 'Shmem: 13412344 kB' 'KReclaimable: 555888 kB' 'Slab: 1435136 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879248 kB' 'KernelStack: 27568 kB' 'PageTables: 10084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508456 kB' 'Committed_AS: 15538488 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.746 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.746 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.747 02:23:33 -- setup/common.sh@33 -- # echo 0 00:03:59.747 02:23:33 -- setup/common.sh@33 -- # return 0 00:03:59.747 02:23:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:59.747 02:23:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.747 02:23:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.747 02:23:33 -- setup/common.sh@18 -- # local node= 00:03:59.747 02:23:33 -- setup/common.sh@19 -- # local var val 00:03:59.747 02:23:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.747 02:23:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.747 02:23:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.747 02:23:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.747 02:23:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.747 02:23:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101220680 kB' 'MemAvailable: 105527508 kB' 'Buffers: 2696 kB' 'Cached: 18201536 kB' 'SwapCached: 0 kB' 'Active: 15156972 kB' 'Inactive: 3667940 kB' 'Active(anon): 14033028 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 
kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623944 kB' 'Mapped: 202244 kB' 'Shmem: 13412348 kB' 'KReclaimable: 555888 kB' 'Slab: 1435204 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879316 kB' 'KernelStack: 27488 kB' 'PageTables: 9508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508456 kB' 'Committed_AS: 15538500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235864 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 
02:23:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # continue 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 02:23:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.013 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.013 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.014 02:23:33 -- setup/common.sh@33 -- # echo 0 00:04:00.014 02:23:33 -- setup/common.sh@33 -- # return 0 00:04:00.014 02:23:33 -- setup/hugepages.sh@99 -- # surp=0 00:04:00.014 02:23:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.014 02:23:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.014 02:23:33 -- setup/common.sh@18 -- # local node= 00:04:00.014 02:23:33 -- setup/common.sh@19 -- # local var val 00:04:00.014 02:23:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.014 02:23:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.014 02:23:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.014 02:23:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.014 02:23:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.014 02:23:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101222500 kB' 'MemAvailable: 105529328 kB' 'Buffers: 2696 kB' 'Cached: 18201548 kB' 'SwapCached: 0 kB' 'Active: 15156880 kB' 'Inactive: 3667940 kB' 'Active(anon): 14032936 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623844 kB' 'Mapped: 202244 kB' 'Shmem: 13412360 kB' 'KReclaimable: 555888 kB' 'Slab: 1435204 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879316 kB' 'KernelStack: 27472 kB' 'PageTables: 9632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508456 kB' 'Committed_AS: 15538516 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235864 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:00.014 02:23:33 
-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.014 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.014 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- 
setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 
02:23:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.015 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.015 02:23:33 -- setup/common.sh@33 -- # echo 0 00:04:00.015 
02:23:33 -- setup/common.sh@33 -- # return 0 00:04:00.015 02:23:33 -- setup/hugepages.sh@100 -- # resv=0 00:04:00.015 02:23:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:00.015 nr_hugepages=1025 00:04:00.015 02:23:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.015 resv_hugepages=0 00:04:00.015 02:23:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.015 surplus_hugepages=0 00:04:00.015 02:23:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.015 anon_hugepages=0 00:04:00.015 02:23:33 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:00.015 02:23:33 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:00.015 02:23:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.015 02:23:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.015 02:23:33 -- setup/common.sh@18 -- # local node= 00:04:00.015 02:23:33 -- setup/common.sh@19 -- # local var val 00:04:00.015 02:23:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.015 02:23:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.015 02:23:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.015 02:23:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.015 02:23:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.015 02:23:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.015 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101222944 kB' 'MemAvailable: 105529772 kB' 'Buffers: 2696 kB' 'Cached: 18201560 kB' 'SwapCached: 0 kB' 'Active: 15157004 kB' 'Inactive: 3667940 kB' 'Active(anon): 14033060 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623944 kB' 'Mapped: 202244 kB' 'Shmem: 13412372 kB' 'KReclaimable: 555888 kB' 'Slab: 1435204 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879316 kB' 'KernelStack: 27536 kB' 'PageTables: 10016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508456 kB' 'Committed_AS: 15538528 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235864 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 
-- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.016 02:23:33 -- setup/common.sh@32 -- # continue 
00:04:00.016 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.016 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 
02:23:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.017 02:23:33 -- setup/common.sh@33 -- # echo 1025 00:04:00.017 02:23:33 -- setup/common.sh@33 -- # return 0 00:04:00.017 02:23:33 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:00.017 02:23:33 -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.017 02:23:33 -- setup/hugepages.sh@27 -- # local node 00:04:00.017 02:23:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.017 02:23:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:00.017 02:23:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.017 02:23:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:00.017 02:23:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.017 02:23:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.017 02:23:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.017 02:23:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.017 02:23:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.017 02:23:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.017 02:23:33 
-- setup/common.sh@18 -- # local node=0 00:04:00.017 02:23:33 -- setup/common.sh@19 -- # local var val 00:04:00.017 02:23:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.017 02:23:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.017 02:23:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.017 02:23:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.017 02:23:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.017 02:23:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 55339008 kB' 'MemUsed: 10320000 kB' 'SwapCached: 0 kB' 'Active: 6930508 kB' 'Inactive: 283868 kB' 'Active(anon): 6234580 kB' 'Inactive(anon): 0 kB' 'Active(file): 695928 kB' 'Inactive(file): 283868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7135944 kB' 'Mapped: 67388 kB' 'AnonPages: 81656 kB' 'Shmem: 6156148 kB' 'KernelStack: 13224 kB' 'PageTables: 2928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193132 kB' 'Slab: 679204 kB' 'SReclaimable: 193132 kB' 'SUnreclaim: 486072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.017 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.017 02:23:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:00.018 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.018 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.018 02:23:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@33 -- # echo 0 00:04:00.019 02:23:33 -- setup/common.sh@33 -- # return 0 00:04:00.019 02:23:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.019 02:23:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.019 02:23:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.019 02:23:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:00.019 02:23:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.019 02:23:33 -- setup/common.sh@18 -- # local node=1 00:04:00.019 02:23:33 -- setup/common.sh@19 -- # local var val 00:04:00.019 02:23:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.019 02:23:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.019 02:23:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:00.019 02:23:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:00.019 02:23:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.019 02:23:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679896 kB' 'MemFree: 45882796 kB' 'MemUsed: 14797100 kB' 'SwapCached: 0 kB' 'Active: 8226040 kB' 'Inactive: 3384072 kB' 'Active(anon): 7798024 kB' 'Inactive(anon): 0 kB' 'Active(file): 428016 kB' 'Inactive(file): 3384072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11068328 kB' 'Mapped: 134856 kB' 'AnonPages: 541820 kB' 'Shmem: 7256240 kB' 'KernelStack: 14280 kB' 'PageTables: 6920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 362756 kB' 'Slab: 756000 kB' 'SReclaimable: 362756 kB' 'SUnreclaim: 393244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 
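The trace above is setup/common.sh's get_meminfo walking every "key: value" pair of /sys/devices/system/node/node1/meminfo until it reaches the requested field (HugePages_Surp here) and echoing its value. Below is a minimal, self-contained bash sketch of that lookup; it is a simplified reimplementation for illustration only, and the name get_meminfo_sketch is invented here rather than taken from the repository.

#!/usr/bin/env bash
# Simplified sketch of a get_meminfo-style lookup (illustrative; not the
# exact setup/common.sh implementation). Prints the value of one meminfo
# key, optionally read from a per-NUMA-node meminfo file.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every key with "Node <n> "; strip that so
    # the same "key: value" parsing works for both files.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0
}

# Example: surplus 2 MiB hugepages on NUMA node 1, as queried in the trace.
get_meminfo_sketch HugePages_Surp 1

Run against the node-1 meminfo shown above it would print 0, matching the "# echo 0" / "# return 0" pair in the trace.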
00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.019 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.019 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- 
setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # continue 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.020 02:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.020 02:23:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.020 02:23:33 -- setup/common.sh@33 -- # echo 0 00:04:00.020 02:23:33 -- setup/common.sh@33 -- # return 0 00:04:00.020 02:23:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.020 02:23:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.020 02:23:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.020 02:23:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.020 02:23:33 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:00.020 node0=512 expecting 513 00:04:00.020 02:23:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.020 02:23:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
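The loop traced here and just below folds each node's measured hugepage count into sorted_t/sorted_s and echoes the per-node result ("node0=512 expecting 513", "node1=513 expecting 512") ahead of the final "512 513" comparison. The same per-node numbers can also be read directly from sysfs; the snippet below is an illustrative cross-check using the standard per-node hugepage attributes, not part of hugepages.sh itself.

#!/usr/bin/env bash
# Illustrative cross-check (not part of the test): read the per-node
# 2 MiB hugepage pools straight from sysfs, i.e. the same counts the
# "nodeX=... expecting ..." lines above and below summarize.
for dir in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
    node=${dir#/sys/devices/system/node/}
    node=${node%%/*}
    printf '%s: total=%s free=%s surplus=%s\n' "$node" \
        "$(cat "$dir/nr_hugepages")" \
        "$(cat "$dir/free_hugepages")" \
        "$(cat "$dir/surplus_hugepages")"
done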
00:04:00.020 02:23:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.020 02:23:33 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:00.020 node1=513 expecting 512 00:04:00.020 02:23:33 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:00.020 00:04:00.020 real 0m3.579s 00:04:00.020 user 0m1.455s 00:04:00.020 sys 0m2.185s 00:04:00.020 02:23:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:00.020 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:04:00.020 ************************************ 00:04:00.020 END TEST odd_alloc 00:04:00.020 ************************************ 00:04:00.020 02:23:33 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:00.020 02:23:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.020 02:23:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.020 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:04:00.281 ************************************ 00:04:00.281 START TEST custom_alloc 00:04:00.281 ************************************ 00:04:00.281 02:23:33 -- common/autotest_common.sh@1111 -- # custom_alloc 00:04:00.281 02:23:33 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:00.281 02:23:33 -- setup/hugepages.sh@169 -- # local node 00:04:00.281 02:23:33 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:00.281 02:23:33 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:00.281 02:23:33 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:00.281 02:23:33 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:00.281 02:23:33 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:00.281 02:23:33 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:00.281 02:23:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.281 02:23:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.281 02:23:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.281 02:23:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:00.281 02:23:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.281 02:23:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.281 02:23:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.281 02:23:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:00.281 02:23:33 -- setup/hugepages.sh@83 -- # : 256 00:04:00.281 02:23:33 -- setup/hugepages.sh@84 -- # : 1 00:04:00.281 02:23:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:00.281 02:23:33 -- setup/hugepages.sh@83 -- # : 0 00:04:00.281 02:23:33 -- setup/hugepages.sh@84 -- # : 0 00:04:00.281 02:23:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:00.281 02:23:33 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:00.281 02:23:33 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:00.281 02:23:33 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:00.281 02:23:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.281 02:23:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.281 02:23:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.281 02:23:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:00.281 02:23:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.281 02:23:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.281 02:23:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.281 02:23:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:00.281 02:23:33 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:00.281 02:23:33 -- setup/hugepages.sh@78 -- # return 0 00:04:00.281 02:23:33 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:00.281 02:23:33 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:00.281 02:23:33 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:00.281 02:23:33 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:00.281 02:23:33 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:00.281 02:23:33 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:00.281 02:23:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.281 02:23:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.281 02:23:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:00.281 02:23:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.281 02:23:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.281 02:23:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.281 02:23:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:00.281 02:23:33 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:00.281 02:23:33 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:00.281 02:23:33 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:00.281 02:23:33 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:00.281 02:23:33 -- setup/hugepages.sh@78 -- # return 0 00:04:00.281 02:23:33 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:00.281 02:23:33 -- setup/hugepages.sh@187 -- # setup output 00:04:00.281 02:23:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.281 02:23:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.589 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:00:01.6 
(8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:03.589 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.589 02:23:37 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:03.589 02:23:37 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:03.589 02:23:37 -- setup/hugepages.sh@89 -- # local node 00:04:03.589 02:23:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.589 02:23:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.589 02:23:37 -- setup/hugepages.sh@92 -- # local surp 00:04:03.589 02:23:37 -- setup/hugepages.sh@93 -- # local resv 00:04:03.589 02:23:37 -- setup/hugepages.sh@94 -- # local anon 00:04:03.589 02:23:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.589 02:23:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.589 02:23:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.589 02:23:37 -- setup/common.sh@18 -- # local node= 00:04:03.589 02:23:37 -- setup/common.sh@19 -- # local var val 00:04:03.589 02:23:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.589 02:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.589 02:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.589 02:23:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.589 02:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.589 02:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.589 02:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 100154680 kB' 'MemAvailable: 104461508 kB' 'Buffers: 2696 kB' 'Cached: 18201672 kB' 'SwapCached: 0 kB' 'Active: 15159072 kB' 'Inactive: 3667940 kB' 'Active(anon): 14035128 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625468 kB' 'Mapped: 202380 kB' 'Shmem: 13412484 kB' 'KReclaimable: 555888 kB' 'Slab: 1435836 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879948 kB' 'KernelStack: 27680 kB' 'PageTables: 10556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985192 kB' 'Committed_AS: 15539236 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235880 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.589 
02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.589 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.589 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- 
setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.590 02:23:37 -- setup/common.sh@33 -- # echo 0 00:04:03.590 02:23:37 -- setup/common.sh@33 -- # return 0 00:04:03.590 02:23:37 -- setup/hugepages.sh@97 -- # anon=0 00:04:03.590 02:23:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.590 02:23:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.590 02:23:37 -- setup/common.sh@18 -- # local node= 00:04:03.590 02:23:37 -- setup/common.sh@19 -- # local var val 00:04:03.590 02:23:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.590 02:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.590 02:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.590 02:23:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.590 02:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.590 02:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 100154076 kB' 'MemAvailable: 104460904 kB' 'Buffers: 2696 kB' 'Cached: 18201672 kB' 'SwapCached: 0 kB' 'Active: 15159320 kB' 'Inactive: 3667940 kB' 'Active(anon): 14035376 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625780 kB' 'Mapped: 202856 kB' 'Shmem: 13412484 kB' 'KReclaimable: 555888 kB' 'Slab: 1435796 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879908 kB' 'KernelStack: 27520 kB' 'PageTables: 10060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985192 kB' 'Committed_AS: 15540340 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235784 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # 
continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.590 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.590 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.591 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.591 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- 
# continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.862 02:23:37 -- setup/common.sh@33 -- # echo 0 00:04:03.862 02:23:37 -- setup/common.sh@33 -- # return 0 00:04:03.862 02:23:37 -- setup/hugepages.sh@99 -- # surp=0 00:04:03.862 02:23:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.862 02:23:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.862 02:23:37 -- setup/common.sh@18 -- # local node= 00:04:03.862 02:23:37 -- setup/common.sh@19 -- # local var val 00:04:03.862 02:23:37 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:03.862 02:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.862 02:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.862 02:23:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.862 02:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.862 02:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 100152040 kB' 'MemAvailable: 104458868 kB' 'Buffers: 2696 kB' 'Cached: 18201684 kB' 'SwapCached: 0 kB' 'Active: 15161804 kB' 'Inactive: 3667940 kB' 'Active(anon): 14037860 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628716 kB' 'Mapped: 202772 kB' 'Shmem: 13412496 kB' 'KReclaimable: 555888 kB' 'Slab: 1435760 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879872 kB' 'KernelStack: 27488 kB' 'PageTables: 10128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985192 kB' 'Committed_AS: 15543788 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235816 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 02:23:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 
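The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" followed by "continue" above are setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" line at a time and skipping every key except the one requested, here HugePages_Rsvd, which reads back 0. A condensed sketch of that loop in plain bash, outside the SPDK helpers:

  # Scan /proc/meminfo until the requested key is found, as the trace above does.
  get=HugePages_Rsvd
  while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # every other key falls through to the next line
      echo "$val"                        # -> 0 on this run
      break
  done < /proc/meminfo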
00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.863 02:23:37 -- setup/common.sh@33 -- # echo 0 00:04:03.863 02:23:37 -- setup/common.sh@33 -- # return 0 00:04:03.863 02:23:37 -- setup/hugepages.sh@100 -- # resv=0 00:04:03.863 02:23:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:03.863 nr_hugepages=1536 00:04:03.863 02:23:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.863 resv_hugepages=0 00:04:03.863 02:23:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.863 surplus_hugepages=0 00:04:03.863 02:23:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.863 anon_hugepages=0 00:04:03.863 02:23:37 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:03.863 02:23:37 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:03.863 02:23:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.863 02:23:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.863 02:23:37 -- setup/common.sh@18 -- # local node= 00:04:03.863 02:23:37 -- setup/common.sh@19 -- # local var val 00:04:03.863 02:23:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.863 02:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.863 02:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.863 02:23:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.863 02:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.863 02:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
126338904 kB' 'MemFree: 100150384 kB' 'MemAvailable: 104457212 kB' 'Buffers: 2696 kB' 'Cached: 18201700 kB' 'SwapCached: 0 kB' 'Active: 15163112 kB' 'Inactive: 3667940 kB' 'Active(anon): 14039168 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 629964 kB' 'Mapped: 203144 kB' 'Shmem: 13412512 kB' 'KReclaimable: 555888 kB' 'Slab: 1435712 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879824 kB' 'KernelStack: 27344 kB' 'PageTables: 10128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985192 kB' 'Committed_AS: 15542492 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235724 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.864 02:23:37 -- setup/common.sh@33 -- # echo 1536 00:04:03.864 02:23:37 -- setup/common.sh@33 -- # return 0 00:04:03.864 02:23:37 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:03.864 02:23:37 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.864 02:23:37 -- setup/hugepages.sh@27 -- # local node 00:04:03.864 02:23:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.864 02:23:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.864 02:23:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.864 02:23:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.864 02:23:37 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.864 02:23:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.864 02:23:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.864 02:23:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.864 02:23:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.864 02:23:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.864 02:23:37 -- setup/common.sh@18 -- # local node=0 00:04:03.864 02:23:37 -- setup/common.sh@19 -- # local var val 00:04:03.864 02:23:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.864 02:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.864 02:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.864 02:23:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.864 02:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.864 02:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 55323908 kB' 'MemUsed: 10335100 kB' 'SwapCached: 0 kB' 'Active: 6932264 kB' 'Inactive: 283868 kB' 'Active(anon): 6236336 kB' 'Inactive(anon): 0 kB' 'Active(file): 695928 kB' 'Inactive(file): 283868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7136056 kB' 'Mapped: 67392 kB' 'AnonPages: 82968 kB' 'Shmem: 6156248 kB' 'KernelStack: 13288 kB' 'PageTables: 3188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193132 kB' 'Slab: 679424 kB' 'SReclaimable: 193132 kB' 'SUnreclaim: 486292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.864 02:23:37 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 
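With HugePages_Total read back as 1536, hugepages.sh re-evaluates the accounting identity shown in the trace: the kernel's total must equal the pages the test requested plus any surplus and reserved pages (the dump's Hugetlb: 3145728 kB is exactly 1536 x 2048 kB). Restated as a stand-alone snippet with this run's values:

  # Accounting check from the trace, using the values reported on this run.
  nr_hugepages=1536   # requested by the custom_alloc test
  surp=0              # surplus_hugepages echoed above
  resv=0              # resv_hugepages echoed above
  total=1536          # HugePages_Total just read from /proc/meminfo
  (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"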
00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- 
setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@33 -- # echo 0 00:04:03.865 02:23:37 -- setup/common.sh@33 -- # return 0 00:04:03.865 02:23:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.865 02:23:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.865 02:23:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.865 02:23:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 
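For the per-node pass, get_meminfo switches its input from /proc/meminfo to /sys/devices/system/node/node<N>/meminfo, where every line carries a "Node N " prefix that the script strips (the mem=("${mem[@]#Node +([0-9]) }") expansion visible in the trace) before running the same key scan; node 0 has just reported HugePages_Surp 0 and node 1 is queried next. A minimal stand-alone version of that lookup, using a hypothetical helper name rather than the script's own functions:

  # Read one field from a NUMA node's meminfo; lines look like
  # "Node 0 HugePages_Surp:      0", so the "Node N " prefix is dropped first.
  node_meminfo_value() {              # hypothetical name, not from setup/common.sh
      local node=$1 key=$2 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "/sys/devices/system/node/node${node}/meminfo")
      return 1
  }
  node_meminfo_value 0 HugePages_Surp   # -> 0, matching the value echoed above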
00:04:03.865 02:23:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.865 02:23:37 -- setup/common.sh@18 -- # local node=1 00:04:03.865 02:23:37 -- setup/common.sh@19 -- # local var val 00:04:03.865 02:23:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.865 02:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.865 02:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:03.865 02:23:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:03.865 02:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.865 02:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679896 kB' 'MemFree: 44827136 kB' 'MemUsed: 15852760 kB' 'SwapCached: 0 kB' 'Active: 8225188 kB' 'Inactive: 3384072 kB' 'Active(anon): 7797172 kB' 'Inactive(anon): 0 kB' 'Active(file): 428016 kB' 'Inactive(file): 3384072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11068368 kB' 'Mapped: 134852 kB' 'AnonPages: 541032 kB' 'Shmem: 7256280 kB' 'KernelStack: 14104 kB' 'PageTables: 6764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 362756 kB' 'Slab: 756256 kB' 'SReclaimable: 362756 kB' 'SUnreclaim: 393500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': 
' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 
-- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 
00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # continue 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.865 02:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.865 02:23:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.865 02:23:37 -- setup/common.sh@33 -- # echo 0 00:04:03.865 02:23:37 -- setup/common.sh@33 -- # return 0 00:04:03.865 02:23:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.865 02:23:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.865 02:23:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.865 02:23:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.865 02:23:37 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:03.865 node0=512 expecting 512 00:04:03.865 02:23:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.865 02:23:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.865 02:23:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.865 02:23:37 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:03.865 node1=1024 expecting 1024 00:04:03.865 02:23:37 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:03.865 00:04:03.865 real 0m3.655s 00:04:03.865 user 0m1.410s 00:04:03.865 sys 0m2.301s 00:04:03.865 02:23:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:03.865 02:23:37 -- common/autotest_common.sh@10 -- # set +x 00:04:03.865 ************************************ 00:04:03.865 END TEST custom_alloc 00:04:03.865 ************************************ 00:04:03.865 02:23:37 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:03.865 02:23:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:03.865 02:23:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.865 02:23:37 -- common/autotest_common.sh@10 -- # set +x 00:04:04.171 ************************************ 00:04:04.171 START TEST no_shrink_alloc 00:04:04.171 ************************************ 00:04:04.171 02:23:37 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:04:04.171 02:23:37 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:04.171 02:23:37 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:04.171 02:23:37 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:04.171 02:23:37 -- setup/hugepages.sh@51 -- # shift 00:04:04.171 02:23:37 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:04.171 02:23:37 -- setup/hugepages.sh@52 
-- # local node_ids 00:04:04.171 02:23:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.171 02:23:37 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:04.171 02:23:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:04.171 02:23:37 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:04.171 02:23:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.171 02:23:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:04.171 02:23:37 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:04.171 02:23:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.171 02:23:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.171 02:23:37 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:04.171 02:23:37 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:04.171 02:23:37 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:04.171 02:23:37 -- setup/hugepages.sh@73 -- # return 0 00:04:04.171 02:23:37 -- setup/hugepages.sh@198 -- # setup output 00:04:04.171 02:23:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.171 02:23:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.752 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:06.752 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.752 02:23:40 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:06.752 02:23:40 -- setup/hugepages.sh@89 -- # local node 00:04:06.752 02:23:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.752 02:23:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.752 02:23:40 -- setup/hugepages.sh@92 -- # local surp 00:04:06.752 02:23:40 -- setup/hugepages.sh@93 -- # local resv 00:04:06.752 02:23:40 -- setup/hugepages.sh@94 -- # local anon 00:04:06.752 02:23:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.752 02:23:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.752 02:23:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.752 02:23:40 -- setup/common.sh@18 -- # local node= 00:04:06.752 02:23:40 -- setup/common.sh@19 -- # local var val 00:04:06.752 02:23:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.752 02:23:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.752 02:23:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.752 02:23:40 
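custom_alloc has just passed (node0=512 and node1=1024 pages, matching the expected 512,1024 split and summing to the 1536 pages verified earlier), and no_shrink_alloc begins by sizing its own reservation: get_test_nr_hugepages 2097152 0 asks for 2097152 kB on node 0 only, which is consistent with the 2048 kB Hugepagesize reported in meminfo and the nr_hugepages=1024 seen in the trace. The arithmetic, as a small sketch:

  # Sizing visible in the trace: a 2097152 kB request with 2048 kB hugepages
  # is 1024 pages, all placed on the single requested node (node 0).
  size_kb=2097152
  hugepagesize_kb=2048          # Hugepagesize reported in /proc/meminfo above
  nr_hugepages=$(( size_kb / hugepagesize_kb ))
  echo "$nr_hugepages"          # -> 1024, matching nr_hugepages=1024 in the trace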
-- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.752 02:23:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.752 02:23:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.752 02:23:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101199884 kB' 'MemAvailable: 105506712 kB' 'Buffers: 2696 kB' 'Cached: 18201816 kB' 'SwapCached: 0 kB' 'Active: 15159892 kB' 'Inactive: 3667940 kB' 'Active(anon): 14035948 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626828 kB' 'Mapped: 202412 kB' 'Shmem: 13412628 kB' 'KReclaimable: 555888 kB' 'Slab: 1436076 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 880188 kB' 'KernelStack: 27600 kB' 'PageTables: 10200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15539856 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235800 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.752 02:23:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.752 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.752 02:23:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 
02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
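The trace above is the xtrace of the meminfo reader in setup/common.sh looping over every key in the snapshot and skipping (continue) everything that is not the field it was asked for, AnonHugePages in this pass. A minimal sketch of that pattern follows, assuming a plain bash environment and the usual "Key: value kB" layout of /proc/meminfo; the function name and exact structure are illustrative, not the verbatim SPDK helper.

    #!/usr/bin/env bash
    # Illustrative re-creation of the key-matching loop seen in the trace:
    # snapshot meminfo, split each line on ': ' into key/value, print the value
    # only for the requested key, otherwise continue (the long run of
    # "continue" entries above), and fall back to 0 if the key is absent.
    get_meminfo_sketch() {
        local get=$1 var val _ line
        local -a mem
        mapfile -t mem < /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done
        echo 0
    }
    get_meminfo_sketch AnonHugePages   # prints 0 on this machine, per the log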
00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.753 02:23:40 -- setup/common.sh@33 -- # echo 0 00:04:06.753 02:23:40 -- setup/common.sh@33 -- # return 0 00:04:06.753 02:23:40 -- setup/hugepages.sh@97 -- # anon=0 00:04:06.753 02:23:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.753 02:23:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.753 02:23:40 -- setup/common.sh@18 -- # local node= 00:04:06.753 02:23:40 -- setup/common.sh@19 -- # local var val 00:04:06.753 02:23:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.753 02:23:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.753 02:23:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.753 02:23:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.753 02:23:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.753 02:23:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101198708 kB' 'MemAvailable: 105505536 kB' 'Buffers: 2696 kB' 'Cached: 18201820 kB' 'SwapCached: 0 kB' 'Active: 15160240 kB' 'Inactive: 3667940 kB' 'Active(anon): 14036296 kB' 'Inactive(anon): 0 kB' 
'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627124 kB' 'Mapped: 202412 kB' 'Shmem: 13412632 kB' 'KReclaimable: 555888 kB' 'Slab: 1436044 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 880156 kB' 'KernelStack: 27648 kB' 'PageTables: 10032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15539868 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.753 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.753 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.754 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.754 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 
02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.755 02:23:40 -- setup/common.sh@33 -- # echo 0 00:04:06.755 02:23:40 -- setup/common.sh@33 -- # return 0 00:04:06.755 02:23:40 -- setup/hugepages.sh@99 -- # surp=0 00:04:06.755 02:23:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.755 02:23:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.755 02:23:40 -- setup/common.sh@18 -- # local node= 00:04:06.755 02:23:40 -- setup/common.sh@19 -- # local var val 00:04:06.755 02:23:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.755 02:23:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.755 02:23:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.755 02:23:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.755 02:23:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.755 02:23:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101199192 kB' 'MemAvailable: 105506020 kB' 'Buffers: 2696 kB' 'Cached: 18201820 kB' 'SwapCached: 0 kB' 'Active: 15160068 kB' 'Inactive: 3667940 kB' 'Active(anon): 14036124 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626876 kB' 'Mapped: 202316 kB' 'Shmem: 13412632 kB' 'KReclaimable: 555888 kB' 'Slab: 1436056 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 880168 kB' 'KernelStack: 27552 kB' 'PageTables: 9792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15539884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 
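One detail repeated throughout the trace is setup/common.sh@29, mem=("${mem[@]#Node +([0-9]) }"). That expansion only matters for the per-node reads later in the run: /sys/devices/system/node/nodeN/meminfo prefixes every line with "Node N ", and stripping that prefix lets the same key/value parser serve both files. A hedged, standalone demonstration (the sample lines are invented to mirror the values in this log):

    #!/usr/bin/env bash
    # The +([0-9]) pattern requires extglob when used inside ${parameter#pattern}.
    shopt -s extglob
    mem=('Node 0 MemTotal: 65659008 kB' 'Node 0 HugePages_Total: 1024')
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <id> " prefix, if present
    printf '%s\n' "${mem[@]}"
    # MemTotal: 65659008 kB
    # HugePages_Total: 1024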
00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.755 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.755 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 
02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.756 02:23:40 -- setup/common.sh@33 -- # echo 0 00:04:06.756 02:23:40 -- setup/common.sh@33 -- # return 0 00:04:06.756 02:23:40 -- setup/hugepages.sh@100 -- # resv=0 00:04:06.756 02:23:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.756 nr_hugepages=1024 00:04:06.756 02:23:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.756 resv_hugepages=0 00:04:06.756 02:23:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.756 surplus_hugepages=0 00:04:06.756 02:23:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.756 anon_hugepages=0 00:04:06.756 02:23:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.756 02:23:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.756 02:23:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.756 02:23:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.756 02:23:40 -- setup/common.sh@18 -- # local node= 00:04:06.756 02:23:40 -- setup/common.sh@19 -- # local var val 00:04:06.756 02:23:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.756 02:23:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.756 02:23:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.756 02:23:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.756 02:23:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.756 02:23:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101198300 kB' 'MemAvailable: 105505128 kB' 'Buffers: 2696 kB' 'Cached: 18201824 kB' 'SwapCached: 0 kB' 'Active: 15160232 kB' 'Inactive: 3667940 kB' 'Active(anon): 14036288 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627036 kB' 'Mapped: 202316 kB' 'Shmem: 13412636 kB' 'KReclaimable: 555888 kB' 'Slab: 1436056 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 880168 kB' 'KernelStack: 27472 kB' 'PageTables: 9620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15539896 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 
-- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.756 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.756 02:23:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- 
setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # continue 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.757 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.757 02:23:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 
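What these repeated get_meminfo passes feed is the consistency check visible at setup/hugepages.sh@107/@110 in the trace: the configured pool (nr_hugepages=1024) must match what the kernel reports, i.e. HugePages_Total == nr_hugepages + surplus + reserved, with the anonymous-hugepage count captured alongside (all three extras are zero in this run). A compact way to reproduce that arithmetic, using awk in place of the script's pure-bash loop; only a sketch, with variable names chosen to mirror the trace:

    #!/usr/bin/env bash
    nr_hugepages=1024                                             # pool size requested (1024 in this log)
    anon=$(awk  '/^AnonHugePages:/   {print $2}' /proc/meminfo)   # 0 here
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # 0 here
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # 0 here
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 here
    (( total == nr_hugepages + surp + resv )) \
        && echo "hugepage pool consistent: total=$total surp=$surp resv=$resv anon=$anon"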
00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.020 02:23:40 -- setup/common.sh@33 -- # echo 1024 00:04:07.020 02:23:40 -- setup/common.sh@33 -- # return 0 00:04:07.020 02:23:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.020 02:23:40 -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.020 02:23:40 -- setup/hugepages.sh@27 -- # local node 00:04:07.020 02:23:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.020 02:23:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.020 02:23:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.020 02:23:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:07.020 02:23:40 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.020 02:23:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.020 02:23:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.020 02:23:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.020 02:23:40 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.020 02:23:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.020 02:23:40 -- setup/common.sh@18 -- # local node=0 00:04:07.020 02:23:40 -- setup/common.sh@19 -- # local var val 00:04:07.020 02:23:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.020 02:23:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.020 02:23:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.020 02:23:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.020 02:23:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.020 02:23:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54273840 kB' 'MemUsed: 11385168 kB' 'SwapCached: 0 kB' 'Active: 6934040 kB' 'Inactive: 283868 kB' 'Active(anon): 6238112 kB' 'Inactive(anon): 0 kB' 'Active(file): 695928 kB' 'Inactive(file): 283868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7136128 kB' 'Mapped: 67388 kB' 'AnonPages: 84988 kB' 'Shmem: 6156332 kB' 'KernelStack: 13288 kB' 'PageTables: 3092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193132 kB' 'Slab: 679544 kB' 'SReclaimable: 193132 kB' 'SUnreclaim: 486412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 
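From setup/hugepages.sh@112 onward the trace switches to the per-NUMA-node view: get_nodes enumerates /sys/devices/system/node/node*, records that this box has two nodes (no_nodes=2), and get_meminfo is re-invoked with a node argument so mem_f points at node0's own meminfo file instead of /proc/meminfo. Below is a self-contained sketch of that pass, again with awk standing in for the bash read loop; on the machine in this log it would report 1024 hugepages on node0 and 0 on node1.

    #!/usr/bin/env bash
    shopt -s nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        id=${node##*node}                                   # .../nodeN -> N
        # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024",
        # so take the last field rather than splitting on ': '.
        nodes_sys[$id]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
        surp=$(awk '/HugePages_Surp:/ {print $NF}' "$node/meminfo")
        echo "node$id: HugePages_Total=${nodes_sys[$id]} HugePages_Surp=$surp"
    done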
00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.020 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.020 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 
02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # continue 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.021 02:23:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.021 02:23:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.021 02:23:40 -- setup/common.sh@33 -- # echo 0 00:04:07.021 02:23:40 -- setup/common.sh@33 -- # return 0 00:04:07.021 02:23:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.021 02:23:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.021 02:23:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.021 02:23:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.021 02:23:40 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.021 node0=1024 expecting 1024 00:04:07.021 02:23:40 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.021 02:23:40 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:07.021 02:23:40 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:07.021 02:23:40 -- setup/hugepages.sh@202 -- # setup output 00:04:07.021 02:23:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.021 02:23:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.328 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:10.328 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:10.328 0000:00:01.1 (8086 0b00): Already using the vfio-pci 
driver 00:04:10.328 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:10.328 02:23:43 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:10.328 02:23:43 -- setup/hugepages.sh@89 -- # local node 00:04:10.328 02:23:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.328 02:23:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.328 02:23:43 -- setup/hugepages.sh@92 -- # local surp 00:04:10.328 02:23:43 -- setup/hugepages.sh@93 -- # local resv 00:04:10.328 02:23:43 -- setup/hugepages.sh@94 -- # local anon 00:04:10.328 02:23:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.328 02:23:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.328 02:23:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.328 02:23:43 -- setup/common.sh@18 -- # local node= 00:04:10.328 02:23:43 -- setup/common.sh@19 -- # local var val 00:04:10.328 02:23:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.328 02:23:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.328 02:23:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.328 02:23:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.328 02:23:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.328 02:23:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101182696 kB' 'MemAvailable: 105489524 kB' 'Buffers: 2696 kB' 'Cached: 18201940 kB' 'SwapCached: 0 kB' 'Active: 15161760 kB' 'Inactive: 3667940 kB' 'Active(anon): 14037816 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628424 kB' 'Mapped: 202868 kB' 'Shmem: 13412752 kB' 'KReclaimable: 555888 kB' 'Slab: 1435844 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879956 kB' 'KernelStack: 27376 kB' 'PageTables: 9420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15544260 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235784 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 
02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.328 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.328 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
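From verify_nr_hugepages onward the script cross-checks the pool it just configured. It first confirms that transparent hugepages are not forced to [never] (the "always [madvise] never != *[never]*" test above), and only then counts AnonHugePages; because no node argument is passed here, /sys/devices/system/node/node/meminfo does not exist and the scan above and below runs against the system-wide /proc/meminfo, ending in anon=0. A compact sketch of that gate, with a hypothetical helper name rather than the verbatim setup/hugepages.sh code:

#!/usr/bin/env bash
# Sketch of the THP gate seen in this part of the trace: AnonHugePages only
# counts toward the hugepage bookkeeping when transparent hugepages are enabled.
# The helper name is illustrative; the paths and fields are the standard kernel ones.
count_anon_hugepages() {
    local thp anon=0
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # /proc/meminfo reports AnonHugePages in kB
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "${anon:-0}"
}

# count_anon_hugepages    # prints 0 in this run (no THP-backed anonymous memory in use)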
00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- 
# [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.329 02:23:43 -- setup/common.sh@33 -- # echo 0 00:04:10.329 02:23:43 -- setup/common.sh@33 -- # return 0 00:04:10.329 02:23:43 -- 
setup/hugepages.sh@97 -- # anon=0 00:04:10.329 02:23:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.329 02:23:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.329 02:23:43 -- setup/common.sh@18 -- # local node= 00:04:10.329 02:23:43 -- setup/common.sh@19 -- # local var val 00:04:10.329 02:23:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.329 02:23:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.329 02:23:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.329 02:23:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.329 02:23:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.329 02:23:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101178252 kB' 'MemAvailable: 105485080 kB' 'Buffers: 2696 kB' 'Cached: 18201944 kB' 'SwapCached: 0 kB' 'Active: 15165612 kB' 'Inactive: 3667940 kB' 'Active(anon): 14041668 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632316 kB' 'Mapped: 203264 kB' 'Shmem: 13412756 kB' 'KReclaimable: 555888 kB' 'Slab: 1435788 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879900 kB' 'KernelStack: 27472 kB' 'PageTables: 9648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15546924 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235804 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 
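The remainder of the trace finishes the HugePages_Surp scan started above (surp=0), repeats it for HugePages_Rsvd (resv=0), echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and then verifies that the allocated pool accounts for the request: HugePages_Total must equal the requested pages plus surplus plus reserved, i.e. 1024 == 1024 + 0 + 0 in this run. A small sketch of that consistency check, with an illustrative function name and the values reported in this log:

#!/usr/bin/env bash
# Sketch of the accounting check performed at the end of this trace.
verify_hugepage_budget() {
    local expected=$1                        # pages requested by the test; 1024 in this run
    local total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp"
    # Consistent when every allocated page is either requested, surplus or reserved:
    # 1024 == 1024 + 0 + 0 for the numbers printed above.
    (( total == expected + surp + resv ))
}

# verify_hugepage_budget 1024 && echo "hugepage pool matches the request"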
00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.329 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.329 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 
-- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.330 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.330 02:23:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.331 02:23:43 -- setup/common.sh@33 -- # echo 0 00:04:10.331 02:23:43 -- setup/common.sh@33 -- # return 0 00:04:10.331 02:23:43 -- setup/hugepages.sh@99 -- # surp=0 00:04:10.331 02:23:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.331 02:23:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.331 02:23:43 -- setup/common.sh@18 -- # local node= 00:04:10.331 02:23:43 -- setup/common.sh@19 -- # local var val 00:04:10.331 02:23:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.331 02:23:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.331 02:23:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.331 02:23:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.331 02:23:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.331 02:23:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
126338904 kB' 'MemFree: 101178708 kB' 'MemAvailable: 105485536 kB' 'Buffers: 2696 kB' 'Cached: 18201956 kB' 'SwapCached: 0 kB' 'Active: 15159400 kB' 'Inactive: 3667940 kB' 'Active(anon): 14035456 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626040 kB' 'Mapped: 202444 kB' 'Shmem: 13412768 kB' 'KReclaimable: 555888 kB' 'Slab: 1435812 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879924 kB' 'KernelStack: 27344 kB' 'PageTables: 9428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15539184 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235672 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.331 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.331 02:23:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # 
continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.332 02:23:43 -- setup/common.sh@33 -- # echo 0 00:04:10.332 02:23:43 -- setup/common.sh@33 -- # return 0 00:04:10.332 02:23:43 -- setup/hugepages.sh@100 -- # resv=0 00:04:10.332 02:23:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.332 nr_hugepages=1024 00:04:10.332 02:23:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.332 resv_hugepages=0 00:04:10.332 02:23:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.332 surplus_hugepages=0 00:04:10.332 02:23:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.332 anon_hugepages=0 00:04:10.332 02:23:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.332 02:23:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.332 02:23:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.332 02:23:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.332 02:23:43 -- setup/common.sh@18 -- # local node= 00:04:10.332 02:23:43 -- setup/common.sh@19 -- # local var val 00:04:10.332 02:23:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.332 02:23:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.332 02:23:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.332 02:23:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.332 02:23:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.332 02:23:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338904 kB' 'MemFree: 101178720 kB' 'MemAvailable: 105485548 kB' 'Buffers: 2696 kB' 'Cached: 18201968 kB' 'SwapCached: 0 kB' 'Active: 15159628 kB' 'Inactive: 3667940 kB' 'Active(anon): 14035684 kB' 'Inactive(anon): 0 kB' 'Active(file): 1123944 kB' 'Inactive(file): 3667940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626276 kB' 'Mapped: 202444 kB' 'Shmem: 13412780 kB' 'KReclaimable: 555888 kB' 'Slab: 1435812 kB' 'SReclaimable: 555888 kB' 'SUnreclaim: 879924 kB' 'KernelStack: 27584 kB' 'PageTables: 9968 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509480 kB' 'Committed_AS: 15539196 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235848 kB' 'VmallocChunk: 0 kB' 'Percpu: 146304 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4543860 kB' 'DirectMap2M: 32884736 kB' 'DirectMap1G: 98566144 kB' 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.332 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.332 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 
02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 
02:23:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 
00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.333 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.333 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 
02:23:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.334 02:23:43 -- setup/common.sh@33 -- # echo 1024 00:04:10.334 02:23:43 -- setup/common.sh@33 -- # return 0 00:04:10.334 02:23:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.334 02:23:43 -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.334 02:23:43 -- setup/hugepages.sh@27 -- # local node 00:04:10.334 02:23:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.334 02:23:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:10.334 02:23:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.334 02:23:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:10.334 02:23:43 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.334 02:23:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.334 02:23:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.334 02:23:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.334 02:23:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.334 02:23:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.334 02:23:43 -- setup/common.sh@18 -- # local node=0 00:04:10.334 02:23:43 -- setup/common.sh@19 -- # local var val 00:04:10.334 02:23:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.334 02:23:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.334 02:23:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.334 02:23:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.334 02:23:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.334 02:23:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54259796 kB' 'MemUsed: 11399212 kB' 'SwapCached: 0 kB' 'Active: 6933088 kB' 'Inactive: 283868 kB' 'Active(anon): 6237160 kB' 'Inactive(anon): 0 kB' 'Active(file): 695928 kB' 'Inactive(file): 283868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7136164 kB' 'Mapped: 67464 kB' 'AnonPages: 84000 kB' 'Shmem: 6156368 kB' 'KernelStack: 13288 kB' 'PageTables: 3060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193132 kB' 'Slab: 679232 kB' 'SReclaimable: 193132 kB' 'SUnreclaim: 486100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # 
continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.334 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.334 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # continue 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.335 02:23:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.335 02:23:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.335 02:23:43 -- setup/common.sh@33 -- # echo 0 00:04:10.335 02:23:43 -- setup/common.sh@33 -- # return 0 00:04:10.335 02:23:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.335 02:23:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.335 02:23:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.335 02:23:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.335 02:23:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:10.335 node0=1024 expecting 1024 00:04:10.335 02:23:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:10.335 00:04:10.335 real 0m6.365s 00:04:10.335 user 0m2.307s 00:04:10.335 sys 0m4.086s 00:04:10.335 02:23:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.335 02:23:43 -- common/autotest_common.sh@10 -- # set +x 00:04:10.335 ************************************ 00:04:10.335 END TEST no_shrink_alloc 00:04:10.335 ************************************ 00:04:10.596 02:23:43 -- setup/hugepages.sh@217 -- # clear_hp 00:04:10.596 02:23:43 -- setup/hugepages.sh@37 -- # local 
node hp 00:04:10.596 02:23:43 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:10.596 02:23:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.596 02:23:43 -- setup/hugepages.sh@41 -- # echo 0 00:04:10.596 02:23:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.596 02:23:43 -- setup/hugepages.sh@41 -- # echo 0 00:04:10.596 02:23:43 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:10.596 02:23:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.596 02:23:43 -- setup/hugepages.sh@41 -- # echo 0 00:04:10.596 02:23:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.596 02:23:43 -- setup/hugepages.sh@41 -- # echo 0 00:04:10.596 02:23:43 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:10.596 02:23:43 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:10.596 00:04:10.596 real 0m25.682s 00:04:10.596 user 0m9.878s 00:04:10.596 sys 0m15.963s 00:04:10.596 02:23:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.596 02:23:43 -- common/autotest_common.sh@10 -- # set +x 00:04:10.596 ************************************ 00:04:10.596 END TEST hugepages 00:04:10.596 ************************************ 00:04:10.596 02:23:44 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:10.596 02:23:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.596 02:23:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.596 02:23:44 -- common/autotest_common.sh@10 -- # set +x 00:04:10.596 ************************************ 00:04:10.596 START TEST driver 00:04:10.596 ************************************ 00:04:10.596 02:23:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:10.857 * Looking for test storage... 
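(Editor's aside before the driver-test output continues: a minimal sketch, assuming root access, of how the per-node hugepage counts verified above can be read and then cleared through sysfs. This is an illustration, not the actual setup/hugepages.sh helpers; the sysfs paths are the same ones clear_hp echoes 0 into in the trace.)

for node in /sys/devices/system/node/node[0-9]*; do
  for hp in "$node"/hugepages/hugepages-*; do
    # e.g. node0 hugepages-2048kB: 1024 pages, matching "node0=1024 expecting 1024" above
    echo "${node##*/} ${hp##*/}: $(cat "$hp/nr_hugepages") pages"
    echo 0 | sudo tee "$hp/nr_hugepages" >/dev/null   # clear_hp writes 0 to release the pool
  done
done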
00:04:10.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:10.857 02:23:44 -- setup/driver.sh@68 -- # setup reset 00:04:10.857 02:23:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.857 02:23:44 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.150 02:23:48 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:16.150 02:23:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.150 02:23:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.150 02:23:48 -- common/autotest_common.sh@10 -- # set +x 00:04:16.150 ************************************ 00:04:16.150 START TEST guess_driver 00:04:16.150 ************************************ 00:04:16.150 02:23:49 -- common/autotest_common.sh@1111 -- # guess_driver 00:04:16.150 02:23:49 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:16.150 02:23:49 -- setup/driver.sh@47 -- # local fail=0 00:04:16.150 02:23:49 -- setup/driver.sh@49 -- # pick_driver 00:04:16.150 02:23:49 -- setup/driver.sh@36 -- # vfio 00:04:16.150 02:23:49 -- setup/driver.sh@21 -- # local iommu_grups 00:04:16.150 02:23:49 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:16.150 02:23:49 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:16.150 02:23:49 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:16.150 02:23:49 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:16.150 02:23:49 -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:16.150 02:23:49 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:16.150 02:23:49 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:16.150 02:23:49 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:16.150 02:23:49 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:16.150 02:23:49 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:16.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:16.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:16.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:16.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:16.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:16.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:16.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:16.150 02:23:49 -- setup/driver.sh@30 -- # return 0 00:04:16.150 02:23:49 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:16.150 02:23:49 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:16.150 02:23:49 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:16.150 02:23:49 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:16.150 Looking for driver=vfio-pci 00:04:16.150 02:23:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.150 02:23:49 -- setup/driver.sh@45 -- # setup output config 00:04:16.150 02:23:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.150 02:23:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:18.697 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.698 02:23:52 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:18.698 02:23:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:18.698 02:23:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.958 02:23:52 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:18.958 02:23:52 -- setup/driver.sh@65 -- # setup reset 00:04:18.958 02:23:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.958 02:23:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.250 00:04:24.250 real 0m7.761s 00:04:24.250 user 0m2.523s 00:04:24.250 sys 0m4.401s 00:04:24.250 02:23:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.250 02:23:56 -- common/autotest_common.sh@10 -- # set +x 00:04:24.250 ************************************ 00:04:24.250 END TEST guess_driver 00:04:24.250 ************************************ 00:04:24.250 00:04:24.250 real 0m12.662s 00:04:24.250 user 0m3.984s 00:04:24.250 sys 0m7.075s 00:04:24.250 02:23:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.250 02:23:56 -- common/autotest_common.sh@10 -- # set +x 00:04:24.250 ************************************ 00:04:24.250 END TEST driver 00:04:24.250 ************************************ 00:04:24.250 02:23:56 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:24.250 02:23:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.250 02:23:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.250 02:23:56 -- common/autotest_common.sh@10 -- # set +x 00:04:24.250 ************************************ 00:04:24.250 START TEST devices 00:04:24.250 ************************************ 00:04:24.250 02:23:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:24.250 * Looking for test storage... 
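(Editor's aside before the devices-test output continues: a minimal sketch of the decision the guess_driver test above reduces to, namely "IOMMU groups exist and the vfio_pci module chain resolves, so use vfio-pci". Illustration only; the fallback branch is an assumption and is not shown in this log.)

shopt -s nullglob                          # so an empty /sys/kernel/iommu_groups yields a zero-length array
iommu_groups=(/sys/kernel/iommu_groups/*)
if (( ${#iommu_groups[@]} > 0 )) && modprobe --show-depends vfio_pci >/dev/null 2>&1; then
  echo 'Looking for driver=vfio-pci'       # same message the trace prints above
else
  echo 'vfio-pci unavailable'              # assumption: the real script would pick a different driver here
fi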
00:04:24.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:24.250 02:23:57 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:24.250 02:23:57 -- setup/devices.sh@192 -- # setup reset 00:04:24.250 02:23:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.250 02:23:57 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.555 02:24:00 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:27.555 02:24:00 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:27.555 02:24:00 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:27.555 02:24:00 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:27.555 02:24:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:27.555 02:24:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:27.555 02:24:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:27.555 02:24:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.555 02:24:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:27.555 02:24:00 -- setup/devices.sh@196 -- # blocks=() 00:04:27.555 02:24:00 -- setup/devices.sh@196 -- # declare -a blocks 00:04:27.555 02:24:00 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:27.555 02:24:00 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:27.555 02:24:00 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:27.555 02:24:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:27.555 02:24:00 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:27.555 02:24:00 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:27.555 02:24:00 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:27.555 02:24:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:27.555 02:24:00 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:27.555 02:24:00 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:27.555 02:24:00 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:27.555 No valid GPT data, bailing 00:04:27.555 02:24:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:27.555 02:24:00 -- scripts/common.sh@391 -- # pt= 00:04:27.555 02:24:00 -- scripts/common.sh@392 -- # return 1 00:04:27.555 02:24:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:27.555 02:24:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:27.555 02:24:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:27.555 02:24:00 -- setup/common.sh@80 -- # echo 1920383410176 00:04:27.556 02:24:00 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:27.556 02:24:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:27.556 02:24:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:27.556 02:24:00 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:27.556 02:24:00 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:27.556 02:24:00 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:27.556 02:24:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.556 02:24:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.556 02:24:00 -- common/autotest_common.sh@10 -- # set +x 00:04:27.556 ************************************ 00:04:27.556 START TEST nvme_mount 00:04:27.556 ************************************ 00:04:27.556 02:24:01 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:04:27.556 02:24:01 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:27.556 02:24:01 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:27.556 02:24:01 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.556 02:24:01 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.556 02:24:01 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:27.556 02:24:01 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:27.556 02:24:01 -- setup/common.sh@40 -- # local part_no=1 00:04:27.556 02:24:01 -- setup/common.sh@41 -- # local size=1073741824 00:04:27.556 02:24:01 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:27.556 02:24:01 -- setup/common.sh@44 -- # parts=() 00:04:27.556 02:24:01 -- setup/common.sh@44 -- # local parts 00:04:27.556 02:24:01 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:27.556 02:24:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.556 02:24:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.556 02:24:01 -- setup/common.sh@46 -- # (( part++ )) 00:04:27.556 02:24:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.556 02:24:01 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:27.556 02:24:01 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:27.556 02:24:01 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:28.498 Creating new GPT entries in memory. 00:04:28.498 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:28.498 other utilities. 00:04:28.498 02:24:02 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:28.498 02:24:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.498 02:24:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:28.498 02:24:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:28.498 02:24:02 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:29.925 Creating new GPT entries in memory. 00:04:29.925 The operation has completed successfully. 
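(Editor's aside: a condensed sketch of the nvme_mount flow traced here and below: zap the disk, create a 1 GiB partition, make a filesystem, mount it, and later undo it all the way cleanup_nvme does further down. The device name comes from this log; the mount point is a hypothetical stand-in. Running this wipes the disk, so it is illustration only.)

disk=/dev/nvme0n1                                   # device under test in this log
mnt=/tmp/nvme_mount                                 # hypothetical mount point for this sketch
sudo sgdisk "$disk" --zap-all                       # destroy any existing GPT/MBR structures
sudo sgdisk "$disk" --new=1:2048:2099199            # 1 GiB partition, same sector range as above
sudo mkfs.ext4 -qF "${disk}p1"
sudo mkdir -p "$mnt" && sudo mount "${disk}p1" "$mnt"
# ... test-file creation and verification happen here ...
sudo umount "$mnt"
sudo wipefs --all "${disk}p1"                       # matches the cleanup_nvme output below
sudo wipefs --all "$disk"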
00:04:29.925 02:24:03 -- setup/common.sh@57 -- # (( part++ )) 00:04:29.925 02:24:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.925 02:24:03 -- setup/common.sh@62 -- # wait 4086913 00:04:29.925 02:24:03 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.925 02:24:03 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:29.925 02:24:03 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.925 02:24:03 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:29.925 02:24:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:29.925 02:24:03 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.925 02:24:03 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.925 02:24:03 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:29.925 02:24:03 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:29.925 02:24:03 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.925 02:24:03 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.925 02:24:03 -- setup/devices.sh@53 -- # local found=0 00:04:29.925 02:24:03 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.925 02:24:03 -- setup/devices.sh@56 -- # : 00:04:29.925 02:24:03 -- setup/devices.sh@59 -- # local pci status 00:04:29.925 02:24:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.925 02:24:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:29.925 02:24:03 -- setup/devices.sh@47 -- # setup output config 00:04:29.925 02:24:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.925 02:24:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.227 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.227 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.227 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.227 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.227 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.227 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.227 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.227 02:24:06 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.227 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:33.227 02:24:06 -- setup/devices.sh@63 -- # found=1 00:04:33.227 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.227 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.227 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.227 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.227 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.227 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.228 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.228 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.228 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.228 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.228 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.228 02:24:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.228 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.228 02:24:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.228 02:24:06 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:33.228 02:24:06 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.228 02:24:06 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.228 02:24:06 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.228 02:24:06 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:33.228 02:24:06 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.228 02:24:06 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.228 02:24:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.228 02:24:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:33.228 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.228 02:24:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.228 02:24:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.228 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:33.228 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:33.228 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:33.228 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:33.228 02:24:06 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:33.228 02:24:06 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:33.228 02:24:06 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.228 02:24:06 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:33.228 02:24:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:33.228 02:24:06 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.228 02:24:06 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.228 02:24:06 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:33.228 02:24:06 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:33.228 02:24:06 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.228 02:24:06 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.228 02:24:06 -- setup/devices.sh@53 -- # local found=0 00:04:33.228 02:24:06 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.228 02:24:06 -- setup/devices.sh@56 -- # : 00:04:33.228 02:24:06 -- setup/devices.sh@59 -- # local pci status 00:04:33.228 02:24:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.228 02:24:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:33.228 02:24:06 -- setup/devices.sh@47 -- # setup output config 00:04:33.228 02:24:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.228 02:24:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:36.537 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.537 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.537 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.537 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.537 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.537 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:09 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:36.538 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:09 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:36.538 02:24:09 -- setup/devices.sh@63 -- # found=1 00:04:36.538 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:10 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:10 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.538 02:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.538 02:24:10 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:36.538 02:24:10 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.538 02:24:10 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.538 02:24:10 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:36.538 02:24:10 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.538 02:24:10 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:36.538 02:24:10 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:36.538 02:24:10 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:36.538 02:24:10 -- setup/devices.sh@50 -- # local mount_point= 00:04:36.538 02:24:10 -- setup/devices.sh@51 -- # local test_file= 00:04:36.538 02:24:10 -- setup/devices.sh@53 -- # local found=0 00:04:36.538 02:24:10 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:36.538 02:24:10 -- setup/devices.sh@59 -- # local pci status 00:04:36.538 02:24:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.538 02:24:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:36.538 02:24:10 -- setup/devices.sh@47 -- # setup output config 00:04:36.538 02:24:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.538 02:24:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:39.845 02:24:13 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:39.845 02:24:13 -- setup/devices.sh@63 -- # found=1 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.845 02:24:13 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.845 02:24:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.107 02:24:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.107 02:24:13 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:40.107 02:24:13 -- setup/devices.sh@68 -- # return 0 00:04:40.107 02:24:13 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:40.107 02:24:13 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.107 02:24:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:40.107 02:24:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.107 02:24:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:40.107 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:40.107 00:04:40.107 real 0m12.419s 00:04:40.107 user 0m3.730s 00:04:40.107 sys 0m6.546s 00:04:40.107 02:24:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:40.107 02:24:13 -- common/autotest_common.sh@10 -- # set +x 00:04:40.107 ************************************ 00:04:40.107 END TEST nvme_mount 00:04:40.107 ************************************ 00:04:40.107 02:24:13 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:40.107 02:24:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.107 02:24:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.107 02:24:13 -- common/autotest_common.sh@10 -- # set +x 00:04:40.107 ************************************ 00:04:40.108 START TEST dm_mount 00:04:40.108 ************************************ 00:04:40.108 02:24:13 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:40.108 02:24:13 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:40.108 02:24:13 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:40.108 02:24:13 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:40.108 02:24:13 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:40.108 02:24:13 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:40.108 02:24:13 -- setup/common.sh@40 -- # local part_no=2 00:04:40.108 02:24:13 -- setup/common.sh@41 -- # local size=1073741824 00:04:40.108 02:24:13 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:40.108 02:24:13 -- setup/common.sh@44 -- # parts=() 00:04:40.108 02:24:13 -- setup/common.sh@44 -- # local parts 00:04:40.108 02:24:13 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:40.108 02:24:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.108 02:24:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.108 02:24:13 -- setup/common.sh@46 -- # (( part++ )) 00:04:40.108 02:24:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.108 02:24:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.108 02:24:13 -- setup/common.sh@46 -- # (( part++ )) 00:04:40.108 02:24:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.108 02:24:13 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:40.108 02:24:13 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:40.108 02:24:13 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:41.495 Creating new GPT entries in memory. 00:04:41.495 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.495 other utilities. 00:04:41.495 02:24:14 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.495 02:24:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.495 02:24:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.495 02:24:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.495 02:24:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:42.438 Creating new GPT entries in memory. 00:04:42.438 The operation has completed successfully. 
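At this point sgdisk has written the first of the two 1 GiB test partitions; the second follows the same pattern just below. In outline, the partition_drive step being traced here reduces to the following sketch (not the exact setup/common.sh code; the sector math matches the --new=1:2048:2099199 and --new=2:2099200:4196351 calls in the trace):

    disk=nvme0n1
    size=$(( 1073741824 / 512 ))            # 1 GiB expressed in 512-byte sectors = 2097152
    sgdisk /dev/$disk --zap-all             # the real script also runs sync_dev_uevents.sh to wait for partition uevents
    part_start=0 part_end=0
    for part in 1 2; do
      part_start=$(( part_start == 0 ? 2048 : part_end + 1 ))
      part_end=$(( part_start + size - 1 ))
      # flock serializes concurrent sgdisk callers against the same disk
      flock /dev/$disk sgdisk /dev/$disk --new=${part}:${part_start}:${part_end}
    done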
00:04:42.438 02:24:15 -- setup/common.sh@57 -- # (( part++ )) 00:04:42.438 02:24:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.438 02:24:15 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:42.438 02:24:15 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.438 02:24:15 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:43.384 The operation has completed successfully. 00:04:43.384 02:24:16 -- setup/common.sh@57 -- # (( part++ )) 00:04:43.384 02:24:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.384 02:24:16 -- setup/common.sh@62 -- # wait 4092357 00:04:43.384 02:24:16 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:43.384 02:24:16 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.384 02:24:16 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.384 02:24:16 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:43.384 02:24:16 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:43.384 02:24:16 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.384 02:24:16 -- setup/devices.sh@161 -- # break 00:04:43.384 02:24:16 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.384 02:24:16 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:43.384 02:24:16 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:43.384 02:24:16 -- setup/devices.sh@166 -- # dm=dm-0 00:04:43.384 02:24:16 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:43.384 02:24:16 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:43.384 02:24:16 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.384 02:24:16 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:43.384 02:24:16 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.384 02:24:16 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.384 02:24:16 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:43.384 02:24:16 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.384 02:24:16 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.384 02:24:16 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:43.384 02:24:16 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:43.384 02:24:16 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.384 02:24:16 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.384 02:24:16 -- setup/devices.sh@53 -- # local found=0 00:04:43.384 02:24:16 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:43.384 02:24:16 -- setup/devices.sh@56 -- # : 00:04:43.384 02:24:16 -- 
setup/devices.sh@59 -- # local pci status 00:04:43.384 02:24:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.384 02:24:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:43.384 02:24:16 -- setup/devices.sh@47 -- # setup output config 00:04:43.384 02:24:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.384 02:24:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.691 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.691 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.691 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.691 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.691 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.691 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.691 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.691 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.691 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.691 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.691 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.691 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.691 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.691 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.691 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.691 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.691 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.691 02:24:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:46.691 02:24:19 -- setup/devices.sh@63 -- # found=1 00:04:46.691 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.691 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.692 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.692 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.692 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.692 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.692 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.692 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.692 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.692 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.692 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.692 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.692 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.692 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.692 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.692 02:24:19 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.692 02:24:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.692 02:24:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.692 02:24:20 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:46.692 02:24:20 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.692 02:24:20 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.692 02:24:20 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.692 02:24:20 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.692 02:24:20 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:46.692 02:24:20 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:46.692 02:24:20 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:46.692 02:24:20 -- setup/devices.sh@50 -- # local mount_point= 00:04:46.692 02:24:20 -- setup/devices.sh@51 -- # local test_file= 00:04:46.692 02:24:20 -- setup/devices.sh@53 -- # local found=0 00:04:46.692 02:24:20 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.692 02:24:20 -- setup/devices.sh@59 -- # local pci status 00:04:46.692 02:24:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.692 02:24:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:46.692 02:24:20 -- setup/devices.sh@47 -- # setup output config 00:04:46.692 02:24:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.692 02:24:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:50.073 02:24:23 -- setup/devices.sh@63 -- # found=1 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.073 02:24:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.073 02:24:23 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.073 02:24:23 -- setup/devices.sh@68 -- # return 0 00:04:50.073 02:24:23 -- setup/devices.sh@187 -- # cleanup_dm 00:04:50.073 02:24:23 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.073 02:24:23 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.073 02:24:23 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:50.073 02:24:23 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:50.073 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.073 02:24:23 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:50.073 00:04:50.073 real 0m9.805s 00:04:50.073 user 0m2.453s 00:04:50.073 sys 0m4.410s 00:04:50.073 02:24:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.073 02:24:23 -- common/autotest_common.sh@10 -- # set +x 00:04:50.073 ************************************ 00:04:50.073 END TEST dm_mount 00:04:50.073 ************************************ 00:04:50.073 02:24:23 -- setup/devices.sh@1 -- # cleanup 00:04:50.073 02:24:23 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:50.073 02:24:23 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.073 02:24:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:50.073 02:24:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.073 02:24:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.334 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 
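The erase messages here and just below are wipefs clearing the on-disk signatures: the ext4 magic (53 ef) in the partition's superblock at offset 0x438, the primary GPT header at offset 0x200 and its backup at the end of the disk (45 46 49 20 50 41 52 54 is "EFI PART" in ASCII), and the protective-MBR signature (55 aa) at offset 0x1fe. The cleanup_nvme helper being traced amounts to roughly this sketch:

    # sketch of cleanup_nvme as traced above (paths shortened)
    mountpoint -q "$nvme_mount" && umount "$nvme_mount"   # umount only if still mounted (assumed; not exercised in this run)
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1   ]] && wipefs --all /dev/nvme0n1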
00:04:50.334 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:50.334 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:50.334 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:50.334 02:24:23 -- setup/devices.sh@12 -- # cleanup_dm 00:04:50.334 02:24:23 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.334 02:24:23 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.334 02:24:23 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.334 02:24:23 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:50.334 02:24:23 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.334 02:24:23 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:50.334 00:04:50.334 real 0m26.804s 00:04:50.334 user 0m7.824s 00:04:50.334 sys 0m13.737s 00:04:50.334 02:24:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.334 02:24:23 -- common/autotest_common.sh@10 -- # set +x 00:04:50.334 ************************************ 00:04:50.334 END TEST devices 00:04:50.334 ************************************ 00:04:50.334 00:04:50.334 real 1m30.471s 00:04:50.334 user 0m30.104s 00:04:50.334 sys 0m51.414s 00:04:50.334 02:24:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.334 02:24:23 -- common/autotest_common.sh@10 -- # set +x 00:04:50.334 ************************************ 00:04:50.334 END TEST setup.sh 00:04:50.334 ************************************ 00:04:50.334 02:24:23 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:53.640 Hugepages 00:04:53.640 node hugesize free / total 00:04:53.640 node0 1048576kB 0 / 0 00:04:53.640 node0 2048kB 2048 / 2048 00:04:53.640 node1 1048576kB 0 / 0 00:04:53.640 node1 2048kB 0 / 0 00:04:53.640 00:04:53.640 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:53.640 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:53.640 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:53.640 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:53.640 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:53.640 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:53.640 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:53.640 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:53.640 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:53.901 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:53.901 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:53.901 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:53.901 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:53.901 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:53.901 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:53.901 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:53.901 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:53.901 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:53.901 02:24:27 -- spdk/autotest.sh@130 -- # uname -s 00:04:53.901 02:24:27 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:53.901 02:24:27 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:53.901 02:24:27 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.205 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:80:01.2 (8086 0b00): 
ioatdma -> vfio-pci 00:04:57.205 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:57.205 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:58.592 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:58.854 02:24:32 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:59.798 02:24:33 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:59.798 02:24:33 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:59.798 02:24:33 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:59.798 02:24:33 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:59.798 02:24:33 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:59.798 02:24:33 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:59.798 02:24:33 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:59.798 02:24:33 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:59.798 02:24:33 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:59.798 02:24:33 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:59.798 02:24:33 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:04:59.798 02:24:33 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:03.102 Waiting for block devices as requested 00:05:03.102 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:03.102 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:03.102 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:03.102 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:03.102 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:03.102 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:03.102 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:03.102 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:03.363 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:03.363 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:03.363 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:03.625 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:03.625 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:03.625 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:03.625 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:03.888 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:03.888 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:03.888 02:24:37 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:03.888 02:24:37 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:03.888 02:24:37 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:05:03.888 02:24:37 -- common/autotest_common.sh@1488 -- # grep 0000:65:00.0/nvme/nvme 00:05:03.888 02:24:37 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:03.888 02:24:37 -- common/autotest_common.sh@1489 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:03.888 02:24:37 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:03.888 02:24:37 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:05:03.888 02:24:37 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:03.888 02:24:37 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:03.888 02:24:37 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:03.888 02:24:37 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:03.888 02:24:37 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:03.888 02:24:37 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:05:03.888 02:24:37 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:03.888 02:24:37 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:03.888 02:24:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:03.888 02:24:37 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:03.888 02:24:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:03.888 02:24:37 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:03.888 02:24:37 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:03.888 02:24:37 -- common/autotest_common.sh@1543 -- # continue 00:05:03.888 02:24:37 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:03.888 02:24:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:03.888 02:24:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.888 02:24:37 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:03.888 02:24:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:03.888 02:24:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.888 02:24:37 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:07.193 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:07.193 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:07.193 02:24:40 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:07.193 02:24:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:07.193 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:05:07.193 02:24:40 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:07.193 02:24:40 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:05:07.193 02:24:40 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:05:07.193 02:24:40 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:07.193 02:24:40 -- common/autotest_common.sh@1563 -- # local bdfs 00:05:07.193 02:24:40 -- common/autotest_common.sh@1565 -- # 
get_nvme_bdfs 00:05:07.193 02:24:40 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:07.193 02:24:40 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:07.193 02:24:40 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.193 02:24:40 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:07.193 02:24:40 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:07.454 02:24:40 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:05:07.454 02:24:40 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:05:07.454 02:24:40 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:07.454 02:24:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:07.454 02:24:40 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:05:07.454 02:24:40 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:07.454 02:24:40 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:05:07.454 02:24:40 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:05:07.454 02:24:40 -- common/autotest_common.sh@1579 -- # return 0 00:05:07.454 02:24:40 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:07.454 02:24:40 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:07.454 02:24:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:07.454 02:24:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:07.454 02:24:40 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:07.454 02:24:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:07.454 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:05:07.454 02:24:40 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:07.454 02:24:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.454 02:24:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.454 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:05:07.454 ************************************ 00:05:07.454 START TEST env 00:05:07.454 ************************************ 00:05:07.454 02:24:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:07.721 * Looking for test storage... 
00:05:07.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:07.722 02:24:41 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:07.722 02:24:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.722 02:24:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.722 02:24:41 -- common/autotest_common.sh@10 -- # set +x 00:05:07.722 ************************************ 00:05:07.722 START TEST env_memory 00:05:07.722 ************************************ 00:05:07.722 02:24:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:07.722 00:05:07.722 00:05:07.722 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.722 http://cunit.sourceforge.net/ 00:05:07.722 00:05:07.722 00:05:07.722 Suite: memory 00:05:07.722 Test: alloc and free memory map ...[2024-04-27 02:24:41.332788] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:07.983 passed 00:05:07.983 Test: mem map translation ...[2024-04-27 02:24:41.358231] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:07.983 [2024-04-27 02:24:41.358249] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:07.983 [2024-04-27 02:24:41.358300] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:07.983 [2024-04-27 02:24:41.358308] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:07.983 passed 00:05:07.983 Test: mem map registration ...[2024-04-27 02:24:41.413385] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:07.983 [2024-04-27 02:24:41.413400] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:07.983 passed 00:05:07.983 Test: mem map adjacent registrations ...passed 00:05:07.983 00:05:07.983 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.983 suites 1 1 n/a 0 0 00:05:07.983 tests 4 4 4 0 0 00:05:07.983 asserts 152 152 152 0 n/a 00:05:07.983 00:05:07.983 Elapsed time = 0.192 seconds 00:05:07.983 00:05:07.983 real 0m0.205s 00:05:07.983 user 0m0.194s 00:05:07.983 sys 0m0.010s 00:05:07.983 02:24:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.983 02:24:41 -- common/autotest_common.sh@10 -- # set +x 00:05:07.983 ************************************ 00:05:07.983 END TEST env_memory 00:05:07.983 ************************************ 00:05:07.983 02:24:41 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:07.983 02:24:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.983 02:24:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.983 02:24:41 -- common/autotest_common.sh@10 -- # set +x 
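run_test env_vtophys, invoked just above, is the same wrapper that produces the START/END banners and the real/user/sys timings seen throughout this log. A minimal sketch of its visible behaviour (the real helper in autotest_common.sh also checks its argument count and toggles xtrace, as the '[' 2 -le 1 ']' and xtrace_disable lines show):

    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                      # the test binary or function, e.g. .../env/vtophys/vtophys
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }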
00:05:08.245 ************************************ 00:05:08.245 START TEST env_vtophys 00:05:08.245 ************************************ 00:05:08.245 02:24:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.245 EAL: lib.eal log level changed from notice to debug 00:05:08.245 EAL: Detected lcore 0 as core 0 on socket 0 00:05:08.245 EAL: Detected lcore 1 as core 1 on socket 0 00:05:08.245 EAL: Detected lcore 2 as core 2 on socket 0 00:05:08.245 EAL: Detected lcore 3 as core 3 on socket 0 00:05:08.245 EAL: Detected lcore 4 as core 4 on socket 0 00:05:08.245 EAL: Detected lcore 5 as core 5 on socket 0 00:05:08.245 EAL: Detected lcore 6 as core 6 on socket 0 00:05:08.245 EAL: Detected lcore 7 as core 7 on socket 0 00:05:08.245 EAL: Detected lcore 8 as core 8 on socket 0 00:05:08.245 EAL: Detected lcore 9 as core 9 on socket 0 00:05:08.245 EAL: Detected lcore 10 as core 10 on socket 0 00:05:08.245 EAL: Detected lcore 11 as core 11 on socket 0 00:05:08.245 EAL: Detected lcore 12 as core 12 on socket 0 00:05:08.245 EAL: Detected lcore 13 as core 13 on socket 0 00:05:08.245 EAL: Detected lcore 14 as core 14 on socket 0 00:05:08.245 EAL: Detected lcore 15 as core 15 on socket 0 00:05:08.245 EAL: Detected lcore 16 as core 16 on socket 0 00:05:08.245 EAL: Detected lcore 17 as core 17 on socket 0 00:05:08.245 EAL: Detected lcore 18 as core 18 on socket 0 00:05:08.245 EAL: Detected lcore 19 as core 19 on socket 0 00:05:08.245 EAL: Detected lcore 20 as core 20 on socket 0 00:05:08.245 EAL: Detected lcore 21 as core 21 on socket 0 00:05:08.245 EAL: Detected lcore 22 as core 22 on socket 0 00:05:08.245 EAL: Detected lcore 23 as core 23 on socket 0 00:05:08.245 EAL: Detected lcore 24 as core 24 on socket 0 00:05:08.245 EAL: Detected lcore 25 as core 25 on socket 0 00:05:08.245 EAL: Detected lcore 26 as core 26 on socket 0 00:05:08.245 EAL: Detected lcore 27 as core 27 on socket 0 00:05:08.245 EAL: Detected lcore 28 as core 28 on socket 0 00:05:08.245 EAL: Detected lcore 29 as core 29 on socket 0 00:05:08.245 EAL: Detected lcore 30 as core 30 on socket 0 00:05:08.245 EAL: Detected lcore 31 as core 31 on socket 0 00:05:08.245 EAL: Detected lcore 32 as core 32 on socket 0 00:05:08.245 EAL: Detected lcore 33 as core 33 on socket 0 00:05:08.245 EAL: Detected lcore 34 as core 34 on socket 0 00:05:08.245 EAL: Detected lcore 35 as core 35 on socket 0 00:05:08.245 EAL: Detected lcore 36 as core 0 on socket 1 00:05:08.245 EAL: Detected lcore 37 as core 1 on socket 1 00:05:08.245 EAL: Detected lcore 38 as core 2 on socket 1 00:05:08.245 EAL: Detected lcore 39 as core 3 on socket 1 00:05:08.245 EAL: Detected lcore 40 as core 4 on socket 1 00:05:08.245 EAL: Detected lcore 41 as core 5 on socket 1 00:05:08.245 EAL: Detected lcore 42 as core 6 on socket 1 00:05:08.245 EAL: Detected lcore 43 as core 7 on socket 1 00:05:08.245 EAL: Detected lcore 44 as core 8 on socket 1 00:05:08.245 EAL: Detected lcore 45 as core 9 on socket 1 00:05:08.245 EAL: Detected lcore 46 as core 10 on socket 1 00:05:08.245 EAL: Detected lcore 47 as core 11 on socket 1 00:05:08.245 EAL: Detected lcore 48 as core 12 on socket 1 00:05:08.245 EAL: Detected lcore 49 as core 13 on socket 1 00:05:08.245 EAL: Detected lcore 50 as core 14 on socket 1 00:05:08.245 EAL: Detected lcore 51 as core 15 on socket 1 00:05:08.245 EAL: Detected lcore 52 as core 16 on socket 1 00:05:08.245 EAL: Detected lcore 53 as core 17 on socket 1 00:05:08.245 EAL: Detected lcore 54 as core 18 on socket 1 
00:05:08.245 EAL: Detected lcore 55 as core 19 on socket 1 00:05:08.245 EAL: Detected lcore 56 as core 20 on socket 1 00:05:08.245 EAL: Detected lcore 57 as core 21 on socket 1 00:05:08.245 EAL: Detected lcore 58 as core 22 on socket 1 00:05:08.245 EAL: Detected lcore 59 as core 23 on socket 1 00:05:08.245 EAL: Detected lcore 60 as core 24 on socket 1 00:05:08.245 EAL: Detected lcore 61 as core 25 on socket 1 00:05:08.245 EAL: Detected lcore 62 as core 26 on socket 1 00:05:08.245 EAL: Detected lcore 63 as core 27 on socket 1 00:05:08.245 EAL: Detected lcore 64 as core 28 on socket 1 00:05:08.245 EAL: Detected lcore 65 as core 29 on socket 1 00:05:08.245 EAL: Detected lcore 66 as core 30 on socket 1 00:05:08.245 EAL: Detected lcore 67 as core 31 on socket 1 00:05:08.245 EAL: Detected lcore 68 as core 32 on socket 1 00:05:08.245 EAL: Detected lcore 69 as core 33 on socket 1 00:05:08.245 EAL: Detected lcore 70 as core 34 on socket 1 00:05:08.245 EAL: Detected lcore 71 as core 35 on socket 1 00:05:08.245 EAL: Detected lcore 72 as core 0 on socket 0 00:05:08.245 EAL: Detected lcore 73 as core 1 on socket 0 00:05:08.245 EAL: Detected lcore 74 as core 2 on socket 0 00:05:08.245 EAL: Detected lcore 75 as core 3 on socket 0 00:05:08.245 EAL: Detected lcore 76 as core 4 on socket 0 00:05:08.245 EAL: Detected lcore 77 as core 5 on socket 0 00:05:08.245 EAL: Detected lcore 78 as core 6 on socket 0 00:05:08.245 EAL: Detected lcore 79 as core 7 on socket 0 00:05:08.245 EAL: Detected lcore 80 as core 8 on socket 0 00:05:08.245 EAL: Detected lcore 81 as core 9 on socket 0 00:05:08.245 EAL: Detected lcore 82 as core 10 on socket 0 00:05:08.245 EAL: Detected lcore 83 as core 11 on socket 0 00:05:08.245 EAL: Detected lcore 84 as core 12 on socket 0 00:05:08.245 EAL: Detected lcore 85 as core 13 on socket 0 00:05:08.245 EAL: Detected lcore 86 as core 14 on socket 0 00:05:08.245 EAL: Detected lcore 87 as core 15 on socket 0 00:05:08.245 EAL: Detected lcore 88 as core 16 on socket 0 00:05:08.245 EAL: Detected lcore 89 as core 17 on socket 0 00:05:08.245 EAL: Detected lcore 90 as core 18 on socket 0 00:05:08.245 EAL: Detected lcore 91 as core 19 on socket 0 00:05:08.245 EAL: Detected lcore 92 as core 20 on socket 0 00:05:08.245 EAL: Detected lcore 93 as core 21 on socket 0 00:05:08.245 EAL: Detected lcore 94 as core 22 on socket 0 00:05:08.245 EAL: Detected lcore 95 as core 23 on socket 0 00:05:08.245 EAL: Detected lcore 96 as core 24 on socket 0 00:05:08.245 EAL: Detected lcore 97 as core 25 on socket 0 00:05:08.245 EAL: Detected lcore 98 as core 26 on socket 0 00:05:08.245 EAL: Detected lcore 99 as core 27 on socket 0 00:05:08.245 EAL: Detected lcore 100 as core 28 on socket 0 00:05:08.245 EAL: Detected lcore 101 as core 29 on socket 0 00:05:08.245 EAL: Detected lcore 102 as core 30 on socket 0 00:05:08.245 EAL: Detected lcore 103 as core 31 on socket 0 00:05:08.245 EAL: Detected lcore 104 as core 32 on socket 0 00:05:08.245 EAL: Detected lcore 105 as core 33 on socket 0 00:05:08.245 EAL: Detected lcore 106 as core 34 on socket 0 00:05:08.245 EAL: Detected lcore 107 as core 35 on socket 0 00:05:08.245 EAL: Detected lcore 108 as core 0 on socket 1 00:05:08.245 EAL: Detected lcore 109 as core 1 on socket 1 00:05:08.245 EAL: Detected lcore 110 as core 2 on socket 1 00:05:08.245 EAL: Detected lcore 111 as core 3 on socket 1 00:05:08.245 EAL: Detected lcore 112 as core 4 on socket 1 00:05:08.245 EAL: Detected lcore 113 as core 5 on socket 1 00:05:08.245 EAL: Detected lcore 114 as core 6 on socket 1 00:05:08.245 
EAL: Detected lcore 115 as core 7 on socket 1 00:05:08.245 EAL: Detected lcore 116 as core 8 on socket 1 00:05:08.245 EAL: Detected lcore 117 as core 9 on socket 1 00:05:08.245 EAL: Detected lcore 118 as core 10 on socket 1 00:05:08.245 EAL: Detected lcore 119 as core 11 on socket 1 00:05:08.245 EAL: Detected lcore 120 as core 12 on socket 1 00:05:08.245 EAL: Detected lcore 121 as core 13 on socket 1 00:05:08.245 EAL: Detected lcore 122 as core 14 on socket 1 00:05:08.245 EAL: Detected lcore 123 as core 15 on socket 1 00:05:08.245 EAL: Detected lcore 124 as core 16 on socket 1 00:05:08.245 EAL: Detected lcore 125 as core 17 on socket 1 00:05:08.245 EAL: Detected lcore 126 as core 18 on socket 1 00:05:08.245 EAL: Detected lcore 127 as core 19 on socket 1 00:05:08.245 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:08.245 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:08.245 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:08.245 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:08.245 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:08.245 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:08.245 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:08.245 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:08.245 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:08.245 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:08.245 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:08.245 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:08.245 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:08.245 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:08.245 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:08.245 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:08.245 EAL: Maximum logical cores by configuration: 128 00:05:08.245 EAL: Detected CPU lcores: 128 00:05:08.245 EAL: Detected NUMA nodes: 2 00:05:08.245 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:08.245 EAL: Detected shared linkage of DPDK 00:05:08.245 EAL: No shared files mode enabled, IPC will be disabled 00:05:08.245 EAL: Bus pci wants IOVA as 'DC' 00:05:08.245 EAL: Buses did not request a specific IOVA mode. 00:05:08.245 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:08.245 EAL: Selected IOVA mode 'VA' 00:05:08.245 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.245 EAL: Probing VFIO support... 00:05:08.245 EAL: IOMMU type 1 (Type 1) is supported 00:05:08.245 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:08.245 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:08.245 EAL: VFIO support initialized 00:05:08.245 EAL: Ask a virtual area of 0x2e000 bytes 00:05:08.245 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:08.245 EAL: Setting up physically contiguous memory... 
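In the memory setup that follows, each "Ask a virtual area of 0x400000000 bytes" reserves address space for one memseg list: 8192 segments per list times the 2 MB hugepage size, both quoted in the "Creating 4 segment lists" lines below, i.e. 16 GiB of virtual space per list (the smaller 0x61000-byte reservations next to them appear to hold the per-list bookkeeping). The arithmetic as a one-liner:

    # 8192 segs/list x 2 MiB hugepages = each "size = 0x400000000" window below
    printf '0x%x\n' $(( 8192 * 2097152 ))    # 0x400000000 (16 GiB)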
00:05:08.245 EAL: Setting maximum number of open files to 524288 00:05:08.245 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:08.245 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:08.245 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:08.245 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.245 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:08.245 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.245 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.245 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:08.245 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:08.245 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.245 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:08.245 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.245 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.245 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:08.245 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:08.245 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.245 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:08.246 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.246 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.246 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:08.246 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:08.246 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.246 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:08.246 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.246 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.246 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:08.246 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:08.246 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:08.246 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.246 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:08.246 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.246 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.246 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:08.246 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:08.246 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.246 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:08.246 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.246 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.246 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:08.246 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:08.246 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.246 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:08.246 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.246 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.246 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:08.246 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:08.246 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.246 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:08.246 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.246 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.246 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:08.246 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:08.246 EAL: Hugepages will be freed exactly as allocated. 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: TSC frequency is ~2400000 KHz 00:05:08.246 EAL: Main lcore 0 is ready (tid=7fa77f3b9a00;cpuset=[0]) 00:05:08.246 EAL: Trying to obtain current memory policy. 00:05:08.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.246 EAL: Restoring previous memory policy: 0 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was expanded by 2MB 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:08.246 EAL: Mem event callback 'spdk:(nil)' registered 00:05:08.246 00:05:08.246 00:05:08.246 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.246 http://cunit.sourceforge.net/ 00:05:08.246 00:05:08.246 00:05:08.246 Suite: components_suite 00:05:08.246 Test: vtophys_malloc_test ...passed 00:05:08.246 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:08.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.246 EAL: Restoring previous memory policy: 4 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was expanded by 4MB 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was shrunk by 4MB 00:05:08.246 EAL: Trying to obtain current memory policy. 00:05:08.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.246 EAL: Restoring previous memory policy: 4 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was expanded by 6MB 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was shrunk by 6MB 00:05:08.246 EAL: Trying to obtain current memory policy. 00:05:08.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.246 EAL: Restoring previous memory policy: 4 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was expanded by 10MB 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was shrunk by 10MB 00:05:08.246 EAL: Trying to obtain current memory policy. 
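The expand/shrink rounds above and below come from vtophys_spdk_malloc_test allocating progressively larger buffers and freeing them, with the allocation roughly doubling each round; the heap deltas EAL reports (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB) track those growing requests. A quick way to reproduce the sequence of deltas (an observation about the numbers in this log, not the test's code):

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB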
00:05:08.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.246 EAL: Restoring previous memory policy: 4 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was expanded by 18MB 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was shrunk by 18MB 00:05:08.246 EAL: Trying to obtain current memory policy. 00:05:08.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.246 EAL: Restoring previous memory policy: 4 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was expanded by 34MB 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was shrunk by 34MB 00:05:08.246 EAL: Trying to obtain current memory policy. 00:05:08.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.246 EAL: Restoring previous memory policy: 4 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was expanded by 66MB 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was shrunk by 66MB 00:05:08.246 EAL: Trying to obtain current memory policy. 00:05:08.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.246 EAL: Restoring previous memory policy: 4 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was expanded by 130MB 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was shrunk by 130MB 00:05:08.246 EAL: Trying to obtain current memory policy. 00:05:08.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.246 EAL: Restoring previous memory policy: 4 00:05:08.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.246 EAL: request: mp_malloc_sync 00:05:08.246 EAL: No shared files mode enabled, IPC is disabled 00:05:08.246 EAL: Heap on socket 0 was expanded by 258MB 00:05:08.508 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.508 EAL: request: mp_malloc_sync 00:05:08.508 EAL: No shared files mode enabled, IPC is disabled 00:05:08.508 EAL: Heap on socket 0 was shrunk by 258MB 00:05:08.508 EAL: Trying to obtain current memory policy. 
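Each allocation round follows the same pattern: the memory policy is set for socket 0, the registered 'spdk:(nil)' mem event callback fires, EAL issues an mp_malloc_sync request (which has nothing to do here, since shared-files mode and therefore IPC is disabled), and the heap delta is reported. To pull just the deltas out of a saved copy of this log (file name hypothetical):

    grep -o 'expanded by [0-9]*MB' vtophys.log | uniq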
00:05:08.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.508 EAL: Restoring previous memory policy: 4 00:05:08.508 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.508 EAL: request: mp_malloc_sync 00:05:08.508 EAL: No shared files mode enabled, IPC is disabled 00:05:08.508 EAL: Heap on socket 0 was expanded by 514MB 00:05:08.508 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.508 EAL: request: mp_malloc_sync 00:05:08.508 EAL: No shared files mode enabled, IPC is disabled 00:05:08.508 EAL: Heap on socket 0 was shrunk by 514MB 00:05:08.508 EAL: Trying to obtain current memory policy. 00:05:08.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.768 EAL: Restoring previous memory policy: 4 00:05:08.768 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.768 EAL: request: mp_malloc_sync 00:05:08.768 EAL: No shared files mode enabled, IPC is disabled 00:05:08.768 EAL: Heap on socket 0 was expanded by 1026MB 00:05:08.768 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.030 EAL: request: mp_malloc_sync 00:05:09.030 EAL: No shared files mode enabled, IPC is disabled 00:05:09.030 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:09.030 passed 00:05:09.030 00:05:09.030 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.030 suites 1 1 n/a 0 0 00:05:09.030 tests 2 2 2 0 0 00:05:09.030 asserts 497 497 497 0 n/a 00:05:09.030 00:05:09.030 Elapsed time = 0.642 seconds 00:05:09.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.030 EAL: request: mp_malloc_sync 00:05:09.030 EAL: No shared files mode enabled, IPC is disabled 00:05:09.030 EAL: Heap on socket 0 was shrunk by 2MB 00:05:09.030 EAL: No shared files mode enabled, IPC is disabled 00:05:09.030 EAL: No shared files mode enabled, IPC is disabled 00:05:09.030 EAL: No shared files mode enabled, IPC is disabled 00:05:09.030 00:05:09.030 real 0m0.754s 00:05:09.030 user 0m0.403s 00:05:09.030 sys 0m0.328s 00:05:09.030 02:24:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:09.030 02:24:42 -- common/autotest_common.sh@10 -- # set +x 00:05:09.030 ************************************ 00:05:09.030 END TEST env_vtophys 00:05:09.030 ************************************ 00:05:09.030 02:24:42 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:09.030 02:24:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.030 02:24:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.030 02:24:42 -- common/autotest_common.sh@10 -- # set +x 00:05:09.030 ************************************ 00:05:09.030 START TEST env_pci 00:05:09.030 ************************************ 00:05:09.030 02:24:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:09.030 00:05:09.030 00:05:09.030 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.030 http://cunit.sourceforge.net/ 00:05:09.030 00:05:09.030 00:05:09.030 Suite: pci 00:05:09.030 Test: pci_hook ...[2024-04-27 02:24:42.619115] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4103170 has claimed it 00:05:09.292 EAL: Cannot find device (10000:00:01.0) 00:05:09.292 EAL: Failed to attach device on primary process 00:05:09.292 passed 00:05:09.292 00:05:09.292 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.292 suites 1 1 n/a 0 0 00:05:09.292 tests 1 1 1 0 0 
00:05:09.292 asserts 25 25 25 0 n/a 00:05:09.292 00:05:09.292 Elapsed time = 0.032 seconds 00:05:09.292 00:05:09.292 real 0m0.052s 00:05:09.292 user 0m0.011s 00:05:09.292 sys 0m0.040s 00:05:09.292 02:24:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:09.292 02:24:42 -- common/autotest_common.sh@10 -- # set +x 00:05:09.292 ************************************ 00:05:09.292 END TEST env_pci 00:05:09.292 ************************************ 00:05:09.292 02:24:42 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:09.292 02:24:42 -- env/env.sh@15 -- # uname 00:05:09.292 02:24:42 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:09.292 02:24:42 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:09.292 02:24:42 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.292 02:24:42 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:09.292 02:24:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.292 02:24:42 -- common/autotest_common.sh@10 -- # set +x 00:05:09.292 ************************************ 00:05:09.292 START TEST env_dpdk_post_init 00:05:09.292 ************************************ 00:05:09.292 02:24:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.292 EAL: Detected CPU lcores: 128 00:05:09.292 EAL: Detected NUMA nodes: 2 00:05:09.292 EAL: Detected shared linkage of DPDK 00:05:09.292 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:09.292 EAL: Selected IOVA mode 'VA' 00:05:09.292 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.292 EAL: VFIO support initialized 00:05:09.292 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.554 EAL: Using IOMMU type 1 (Type 1) 00:05:09.554 EAL: Ignore mapping IO port bar(1) 00:05:09.554 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:09.815 EAL: Ignore mapping IO port bar(1) 00:05:09.815 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:10.075 EAL: Ignore mapping IO port bar(1) 00:05:10.075 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:10.336 EAL: Ignore mapping IO port bar(1) 00:05:10.336 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:10.336 EAL: Ignore mapping IO port bar(1) 00:05:10.596 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:10.596 EAL: Ignore mapping IO port bar(1) 00:05:10.856 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:10.856 EAL: Ignore mapping IO port bar(1) 00:05:11.117 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:11.117 EAL: Ignore mapping IO port bar(1) 00:05:11.117 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:11.378 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:11.638 EAL: Ignore mapping IO port bar(1) 00:05:11.638 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:11.898 EAL: Ignore mapping IO port bar(1) 00:05:11.898 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:12.159 EAL: Ignore mapping IO port bar(1) 00:05:12.159 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
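The probe lines around this point are env_dpdk_post_init walking the I/OAT DMA channels on both sockets before it attaches the NVMe controller at 0000:65:00.0 a little further down. If a run shows an empty probe list, a quick sanity check against the vendor:device IDs printed here (a generic sketch, not part of the test itself):

    lspci -nn | grep -i -e '8086:0b00' -e '144d:a80a'   # IOAT channels and the NVMe device this job claims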
00:05:12.159 EAL: Ignore mapping IO port bar(1) 00:05:12.419 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:12.419 EAL: Ignore mapping IO port bar(1) 00:05:12.680 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:12.680 EAL: Ignore mapping IO port bar(1) 00:05:12.680 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:12.940 EAL: Ignore mapping IO port bar(1) 00:05:12.940 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:13.200 EAL: Ignore mapping IO port bar(1) 00:05:13.200 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:13.200 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:13.200 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:13.461 Starting DPDK initialization... 00:05:13.461 Starting SPDK post initialization... 00:05:13.461 SPDK NVMe probe 00:05:13.461 Attaching to 0000:65:00.0 00:05:13.461 Attached to 0000:65:00.0 00:05:13.461 Cleaning up... 00:05:15.371 00:05:15.371 real 0m5.712s 00:05:15.371 user 0m0.172s 00:05:15.371 sys 0m0.086s 00:05:15.371 02:24:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.371 02:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:15.371 ************************************ 00:05:15.371 END TEST env_dpdk_post_init 00:05:15.371 ************************************ 00:05:15.371 02:24:48 -- env/env.sh@26 -- # uname 00:05:15.371 02:24:48 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:15.371 02:24:48 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.371 02:24:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.371 02:24:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.371 02:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:15.371 ************************************ 00:05:15.371 START TEST env_mem_callbacks 00:05:15.371 ************************************ 00:05:15.371 02:24:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.371 EAL: Detected CPU lcores: 128 00:05:15.371 EAL: Detected NUMA nodes: 2 00:05:15.371 EAL: Detected shared linkage of DPDK 00:05:15.371 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.371 EAL: Selected IOVA mode 'VA' 00:05:15.371 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.371 EAL: VFIO support initialized 00:05:15.371 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.371 00:05:15.371 00:05:15.371 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.371 http://cunit.sourceforge.net/ 00:05:15.371 00:05:15.371 00:05:15.371 Suite: memory 00:05:15.371 Test: test ... 
00:05:15.371 register 0x200000200000 2097152 00:05:15.371 malloc 3145728 00:05:15.371 register 0x200000400000 4194304 00:05:15.371 buf 0x200000500000 len 3145728 PASSED 00:05:15.371 malloc 64 00:05:15.371 buf 0x2000004fff40 len 64 PASSED 00:05:15.371 malloc 4194304 00:05:15.371 register 0x200000800000 6291456 00:05:15.371 buf 0x200000a00000 len 4194304 PASSED 00:05:15.371 free 0x200000500000 3145728 00:05:15.371 free 0x2000004fff40 64 00:05:15.371 unregister 0x200000400000 4194304 PASSED 00:05:15.371 free 0x200000a00000 4194304 00:05:15.371 unregister 0x200000800000 6291456 PASSED 00:05:15.371 malloc 8388608 00:05:15.371 register 0x200000400000 10485760 00:05:15.371 buf 0x200000600000 len 8388608 PASSED 00:05:15.371 free 0x200000600000 8388608 00:05:15.371 unregister 0x200000400000 10485760 PASSED 00:05:15.371 passed 00:05:15.371 00:05:15.371 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.371 suites 1 1 n/a 0 0 00:05:15.371 tests 1 1 1 0 0 00:05:15.371 asserts 15 15 15 0 n/a 00:05:15.371 00:05:15.371 Elapsed time = 0.008 seconds 00:05:15.371 00:05:15.371 real 0m0.064s 00:05:15.371 user 0m0.023s 00:05:15.371 sys 0m0.041s 00:05:15.371 02:24:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.371 02:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:15.371 ************************************ 00:05:15.371 END TEST env_mem_callbacks 00:05:15.371 ************************************ 00:05:15.371 00:05:15.371 real 0m7.806s 00:05:15.371 user 0m1.222s 00:05:15.371 sys 0m1.041s 00:05:15.371 02:24:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.371 02:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:15.371 ************************************ 00:05:15.371 END TEST env 00:05:15.371 ************************************ 00:05:15.371 02:24:48 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.371 02:24:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.371 02:24:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.371 02:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:15.632 ************************************ 00:05:15.632 START TEST rpc 00:05:15.632 ************************************ 00:05:15.632 02:24:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.632 * Looking for test storage... 00:05:15.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.632 02:24:49 -- rpc/rpc.sh@65 -- # spdk_pid=4104567 00:05:15.632 02:24:49 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.632 02:24:49 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:15.632 02:24:49 -- rpc/rpc.sh@67 -- # waitforlisten 4104567 00:05:15.632 02:24:49 -- common/autotest_common.sh@817 -- # '[' -z 4104567 ']' 00:05:15.632 02:24:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.632 02:24:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:15.632 02:24:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
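The wait loop above polls until spdk_tgt, started with '-e bdev' so the trace test later has something to inspect, opens /var/tmp/spdk.sock; its startup notices follow below. A minimal hand-run equivalent, assuming the default socket path (rpc_get_methods is simply one convenient liveness probe):

    build/bin/spdk_tgt -e bdev &
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null && echo listening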
00:05:15.632 02:24:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:15.632 02:24:49 -- common/autotest_common.sh@10 -- # set +x 00:05:15.632 [2024-04-27 02:24:49.169085] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:05:15.632 [2024-04-27 02:24:49.169136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104567 ] 00:05:15.632 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.632 [2024-04-27 02:24:49.230379] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.894 [2024-04-27 02:24:49.298997] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:15.894 [2024-04-27 02:24:49.299031] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4104567' to capture a snapshot of events at runtime. 00:05:15.894 [2024-04-27 02:24:49.299039] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:15.894 [2024-04-27 02:24:49.299045] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:15.894 [2024-04-27 02:24:49.299051] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4104567 for offline analysis/debug. 00:05:15.894 [2024-04-27 02:24:49.299071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.466 02:24:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:16.466 02:24:49 -- common/autotest_common.sh@850 -- # return 0 00:05:16.466 02:24:49 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.466 02:24:49 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.466 02:24:49 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:16.466 02:24:49 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:16.466 02:24:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.466 02:24:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.466 02:24:49 -- common/autotest_common.sh@10 -- # set +x 00:05:16.466 ************************************ 00:05:16.466 START TEST rpc_integrity 00:05:16.466 ************************************ 00:05:16.466 02:24:50 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:16.466 02:24:50 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.466 02:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.466 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.466 02:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.466 02:24:50 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:16.466 02:24:50 -- rpc/rpc.sh@13 -- # jq length 00:05:16.727 02:24:50 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.727 02:24:50 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:16.727 02:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 
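rpc_integrity drives the flow visible in the xtrace: list bdevs, create an 8 MB malloc bdev with 512-byte blocks, layer a passthru bdev on top, then delete both and confirm the list is empty again. Outside the harness the equivalent calls would be roughly:

    scripts/rpc.py bdev_get_bdevs | jq length            # 0 on a fresh target
    scripts/rpc.py bdev_malloc_create 8 512              # returns the bdev name, Malloc0 here
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0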
00:05:16.727 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.727 02:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.727 02:24:50 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:16.727 02:24:50 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.727 02:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.727 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.727 02:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.727 02:24:50 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.727 { 00:05:16.727 "name": "Malloc0", 00:05:16.727 "aliases": [ 00:05:16.727 "38e9a45b-301a-4170-b091-238e10e8d5b1" 00:05:16.727 ], 00:05:16.727 "product_name": "Malloc disk", 00:05:16.727 "block_size": 512, 00:05:16.727 "num_blocks": 16384, 00:05:16.727 "uuid": "38e9a45b-301a-4170-b091-238e10e8d5b1", 00:05:16.727 "assigned_rate_limits": { 00:05:16.727 "rw_ios_per_sec": 0, 00:05:16.727 "rw_mbytes_per_sec": 0, 00:05:16.727 "r_mbytes_per_sec": 0, 00:05:16.727 "w_mbytes_per_sec": 0 00:05:16.727 }, 00:05:16.727 "claimed": false, 00:05:16.727 "zoned": false, 00:05:16.727 "supported_io_types": { 00:05:16.727 "read": true, 00:05:16.727 "write": true, 00:05:16.727 "unmap": true, 00:05:16.727 "write_zeroes": true, 00:05:16.727 "flush": true, 00:05:16.727 "reset": true, 00:05:16.727 "compare": false, 00:05:16.727 "compare_and_write": false, 00:05:16.727 "abort": true, 00:05:16.727 "nvme_admin": false, 00:05:16.727 "nvme_io": false 00:05:16.727 }, 00:05:16.727 "memory_domains": [ 00:05:16.727 { 00:05:16.727 "dma_device_id": "system", 00:05:16.727 "dma_device_type": 1 00:05:16.728 }, 00:05:16.728 { 00:05:16.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.728 "dma_device_type": 2 00:05:16.728 } 00:05:16.728 ], 00:05:16.728 "driver_specific": {} 00:05:16.728 } 00:05:16.728 ]' 00:05:16.728 02:24:50 -- rpc/rpc.sh@17 -- # jq length 00:05:16.728 02:24:50 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:16.728 02:24:50 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:16.728 02:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.728 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.728 [2024-04-27 02:24:50.198676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:16.728 [2024-04-27 02:24:50.198707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:16.728 [2024-04-27 02:24:50.198720] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f67120 00:05:16.728 [2024-04-27 02:24:50.198727] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:16.728 [2024-04-27 02:24:50.200076] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:16.728 [2024-04-27 02:24:50.200097] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:16.728 Passthru0 00:05:16.728 02:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.728 02:24:50 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:16.728 02:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.728 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.728 02:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.728 02:24:50 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:16.728 { 00:05:16.728 "name": "Malloc0", 00:05:16.728 "aliases": [ 00:05:16.728 "38e9a45b-301a-4170-b091-238e10e8d5b1" 00:05:16.728 ], 00:05:16.728 "product_name": "Malloc disk", 00:05:16.728 "block_size": 512, 
00:05:16.728 "num_blocks": 16384, 00:05:16.728 "uuid": "38e9a45b-301a-4170-b091-238e10e8d5b1", 00:05:16.728 "assigned_rate_limits": { 00:05:16.728 "rw_ios_per_sec": 0, 00:05:16.728 "rw_mbytes_per_sec": 0, 00:05:16.728 "r_mbytes_per_sec": 0, 00:05:16.728 "w_mbytes_per_sec": 0 00:05:16.728 }, 00:05:16.728 "claimed": true, 00:05:16.728 "claim_type": "exclusive_write", 00:05:16.728 "zoned": false, 00:05:16.728 "supported_io_types": { 00:05:16.728 "read": true, 00:05:16.728 "write": true, 00:05:16.728 "unmap": true, 00:05:16.728 "write_zeroes": true, 00:05:16.728 "flush": true, 00:05:16.728 "reset": true, 00:05:16.728 "compare": false, 00:05:16.728 "compare_and_write": false, 00:05:16.728 "abort": true, 00:05:16.728 "nvme_admin": false, 00:05:16.728 "nvme_io": false 00:05:16.728 }, 00:05:16.728 "memory_domains": [ 00:05:16.728 { 00:05:16.728 "dma_device_id": "system", 00:05:16.728 "dma_device_type": 1 00:05:16.728 }, 00:05:16.728 { 00:05:16.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.728 "dma_device_type": 2 00:05:16.728 } 00:05:16.728 ], 00:05:16.728 "driver_specific": {} 00:05:16.728 }, 00:05:16.728 { 00:05:16.728 "name": "Passthru0", 00:05:16.728 "aliases": [ 00:05:16.728 "b057bcad-6bd2-5ff9-be0e-8d5b89bde014" 00:05:16.728 ], 00:05:16.728 "product_name": "passthru", 00:05:16.728 "block_size": 512, 00:05:16.728 "num_blocks": 16384, 00:05:16.728 "uuid": "b057bcad-6bd2-5ff9-be0e-8d5b89bde014", 00:05:16.728 "assigned_rate_limits": { 00:05:16.728 "rw_ios_per_sec": 0, 00:05:16.728 "rw_mbytes_per_sec": 0, 00:05:16.728 "r_mbytes_per_sec": 0, 00:05:16.728 "w_mbytes_per_sec": 0 00:05:16.728 }, 00:05:16.728 "claimed": false, 00:05:16.728 "zoned": false, 00:05:16.728 "supported_io_types": { 00:05:16.728 "read": true, 00:05:16.728 "write": true, 00:05:16.728 "unmap": true, 00:05:16.728 "write_zeroes": true, 00:05:16.728 "flush": true, 00:05:16.728 "reset": true, 00:05:16.728 "compare": false, 00:05:16.728 "compare_and_write": false, 00:05:16.728 "abort": true, 00:05:16.728 "nvme_admin": false, 00:05:16.728 "nvme_io": false 00:05:16.728 }, 00:05:16.728 "memory_domains": [ 00:05:16.728 { 00:05:16.728 "dma_device_id": "system", 00:05:16.728 "dma_device_type": 1 00:05:16.728 }, 00:05:16.728 { 00:05:16.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.728 "dma_device_type": 2 00:05:16.728 } 00:05:16.728 ], 00:05:16.728 "driver_specific": { 00:05:16.728 "passthru": { 00:05:16.728 "name": "Passthru0", 00:05:16.728 "base_bdev_name": "Malloc0" 00:05:16.728 } 00:05:16.728 } 00:05:16.728 } 00:05:16.728 ]' 00:05:16.728 02:24:50 -- rpc/rpc.sh@21 -- # jq length 00:05:16.728 02:24:50 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:16.728 02:24:50 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:16.728 02:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.728 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.728 02:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.728 02:24:50 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:16.728 02:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.728 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.728 02:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.728 02:24:50 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:16.728 02:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.728 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.728 02:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.728 02:24:50 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:05:16.728 02:24:50 -- rpc/rpc.sh@26 -- # jq length 00:05:16.989 02:24:50 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:16.989 00:05:16.989 real 0m0.289s 00:05:16.989 user 0m0.182s 00:05:16.989 sys 0m0.045s 00:05:16.989 02:24:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.989 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.989 ************************************ 00:05:16.989 END TEST rpc_integrity 00:05:16.989 ************************************ 00:05:16.989 02:24:50 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:16.989 02:24:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.989 02:24:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.989 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.989 ************************************ 00:05:16.989 START TEST rpc_plugins 00:05:16.989 ************************************ 00:05:16.989 02:24:50 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:05:16.989 02:24:50 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:16.989 02:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.989 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.989 02:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.989 02:24:50 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:16.989 02:24:50 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:16.990 02:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.990 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.990 02:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.990 02:24:50 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:16.990 { 00:05:16.990 "name": "Malloc1", 00:05:16.990 "aliases": [ 00:05:16.990 "33cf4a78-f4d1-4833-8bac-f6aacf137db9" 00:05:16.990 ], 00:05:16.990 "product_name": "Malloc disk", 00:05:16.990 "block_size": 4096, 00:05:16.990 "num_blocks": 256, 00:05:16.990 "uuid": "33cf4a78-f4d1-4833-8bac-f6aacf137db9", 00:05:16.990 "assigned_rate_limits": { 00:05:16.990 "rw_ios_per_sec": 0, 00:05:16.990 "rw_mbytes_per_sec": 0, 00:05:16.990 "r_mbytes_per_sec": 0, 00:05:16.990 "w_mbytes_per_sec": 0 00:05:16.990 }, 00:05:16.990 "claimed": false, 00:05:16.990 "zoned": false, 00:05:16.990 "supported_io_types": { 00:05:16.990 "read": true, 00:05:16.990 "write": true, 00:05:16.990 "unmap": true, 00:05:16.990 "write_zeroes": true, 00:05:16.990 "flush": true, 00:05:16.990 "reset": true, 00:05:16.990 "compare": false, 00:05:16.990 "compare_and_write": false, 00:05:16.990 "abort": true, 00:05:16.990 "nvme_admin": false, 00:05:16.990 "nvme_io": false 00:05:16.990 }, 00:05:16.990 "memory_domains": [ 00:05:16.990 { 00:05:16.990 "dma_device_id": "system", 00:05:16.990 "dma_device_type": 1 00:05:16.990 }, 00:05:16.990 { 00:05:16.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.990 "dma_device_type": 2 00:05:16.990 } 00:05:16.990 ], 00:05:16.990 "driver_specific": {} 00:05:16.990 } 00:05:16.990 ]' 00:05:16.990 02:24:50 -- rpc/rpc.sh@32 -- # jq length 00:05:16.990 02:24:50 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:16.990 02:24:50 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:16.990 02:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.990 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.251 02:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.251 02:24:50 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:17.251 02:24:50 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:05:17.251 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.251 02:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.251 02:24:50 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:17.251 02:24:50 -- rpc/rpc.sh@36 -- # jq length 00:05:17.251 02:24:50 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:17.251 00:05:17.251 real 0m0.152s 00:05:17.251 user 0m0.092s 00:05:17.251 sys 0m0.021s 00:05:17.251 02:24:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.251 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.251 ************************************ 00:05:17.251 END TEST rpc_plugins 00:05:17.251 ************************************ 00:05:17.251 02:24:50 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:17.251 02:24:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.251 02:24:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.251 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.251 ************************************ 00:05:17.251 START TEST rpc_trace_cmd_test 00:05:17.251 ************************************ 00:05:17.251 02:24:50 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:05:17.251 02:24:50 -- rpc/rpc.sh@40 -- # local info 00:05:17.251 02:24:50 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:17.251 02:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.251 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.251 02:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.251 02:24:50 -- rpc/rpc.sh@42 -- # info='{ 00:05:17.251 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4104567", 00:05:17.251 "tpoint_group_mask": "0x8", 00:05:17.251 "iscsi_conn": { 00:05:17.251 "mask": "0x2", 00:05:17.251 "tpoint_mask": "0x0" 00:05:17.251 }, 00:05:17.251 "scsi": { 00:05:17.251 "mask": "0x4", 00:05:17.251 "tpoint_mask": "0x0" 00:05:17.251 }, 00:05:17.251 "bdev": { 00:05:17.251 "mask": "0x8", 00:05:17.251 "tpoint_mask": "0xffffffffffffffff" 00:05:17.251 }, 00:05:17.251 "nvmf_rdma": { 00:05:17.251 "mask": "0x10", 00:05:17.251 "tpoint_mask": "0x0" 00:05:17.251 }, 00:05:17.251 "nvmf_tcp": { 00:05:17.251 "mask": "0x20", 00:05:17.251 "tpoint_mask": "0x0" 00:05:17.251 }, 00:05:17.251 "ftl": { 00:05:17.251 "mask": "0x40", 00:05:17.251 "tpoint_mask": "0x0" 00:05:17.251 }, 00:05:17.251 "blobfs": { 00:05:17.251 "mask": "0x80", 00:05:17.251 "tpoint_mask": "0x0" 00:05:17.251 }, 00:05:17.251 "dsa": { 00:05:17.251 "mask": "0x200", 00:05:17.251 "tpoint_mask": "0x0" 00:05:17.251 }, 00:05:17.251 "thread": { 00:05:17.251 "mask": "0x400", 00:05:17.251 "tpoint_mask": "0x0" 00:05:17.251 }, 00:05:17.251 "nvme_pcie": { 00:05:17.251 "mask": "0x800", 00:05:17.251 "tpoint_mask": "0x0" 00:05:17.251 }, 00:05:17.251 "iaa": { 00:05:17.251 "mask": "0x1000", 00:05:17.251 "tpoint_mask": "0x0" 00:05:17.251 }, 00:05:17.251 "nvme_tcp": { 00:05:17.251 "mask": "0x2000", 00:05:17.251 "tpoint_mask": "0x0" 00:05:17.251 }, 00:05:17.251 "bdev_nvme": { 00:05:17.251 "mask": "0x4000", 00:05:17.251 "tpoint_mask": "0x0" 00:05:17.251 }, 00:05:17.251 "sock": { 00:05:17.251 "mask": "0x8000", 00:05:17.251 "tpoint_mask": "0x0" 00:05:17.251 } 00:05:17.251 }' 00:05:17.251 02:24:50 -- rpc/rpc.sh@43 -- # jq length 00:05:17.513 02:24:50 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:17.513 02:24:50 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:17.513 02:24:50 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:17.513 02:24:50 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
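rpc_trace_cmd_test only asserts on the trace_get_info output shown above: a tpoint_shm_path exists, a tpoint_group_mask is set (0x8, the bdev group enabled by '-e bdev'), and the bdev entry's tpoint_mask is non-zero. To actually decode the trace, the target's own startup notice points at spdk_trace; a sketch using the PID from this run:

    scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path
    spdk_trace -s spdk_tgt -p 4104567      # reads /dev/shm/spdk_tgt_trace.pid4104567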
00:05:17.513 02:24:51 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:17.513 02:24:51 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:17.513 02:24:51 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:17.513 02:24:51 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:17.513 02:24:51 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:17.513 00:05:17.513 real 0m0.243s 00:05:17.513 user 0m0.204s 00:05:17.513 sys 0m0.029s 00:05:17.513 02:24:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.513 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:17.513 ************************************ 00:05:17.513 END TEST rpc_trace_cmd_test 00:05:17.513 ************************************ 00:05:17.513 02:24:51 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:17.513 02:24:51 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:17.513 02:24:51 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:17.514 02:24:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.514 02:24:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.514 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:17.775 ************************************ 00:05:17.775 START TEST rpc_daemon_integrity 00:05:17.775 ************************************ 00:05:17.775 02:24:51 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:17.775 02:24:51 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.775 02:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.775 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:17.775 02:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.775 02:24:51 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.775 02:24:51 -- rpc/rpc.sh@13 -- # jq length 00:05:17.775 02:24:51 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.775 02:24:51 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.775 02:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.775 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:17.775 02:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.775 02:24:51 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:17.775 02:24:51 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.775 02:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.775 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:17.775 02:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.775 02:24:51 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.775 { 00:05:17.775 "name": "Malloc2", 00:05:17.775 "aliases": [ 00:05:17.775 "6752bee9-e1a3-450e-bec2-0338367889e5" 00:05:17.775 ], 00:05:17.775 "product_name": "Malloc disk", 00:05:17.775 "block_size": 512, 00:05:17.775 "num_blocks": 16384, 00:05:17.775 "uuid": "6752bee9-e1a3-450e-bec2-0338367889e5", 00:05:17.775 "assigned_rate_limits": { 00:05:17.775 "rw_ios_per_sec": 0, 00:05:17.775 "rw_mbytes_per_sec": 0, 00:05:17.775 "r_mbytes_per_sec": 0, 00:05:17.775 "w_mbytes_per_sec": 0 00:05:17.775 }, 00:05:17.775 "claimed": false, 00:05:17.775 "zoned": false, 00:05:17.775 "supported_io_types": { 00:05:17.775 "read": true, 00:05:17.775 "write": true, 00:05:17.775 "unmap": true, 00:05:17.775 "write_zeroes": true, 00:05:17.775 "flush": true, 00:05:17.775 "reset": true, 00:05:17.775 "compare": false, 00:05:17.775 "compare_and_write": false, 00:05:17.775 "abort": true, 00:05:17.775 "nvme_admin": false, 00:05:17.775 "nvme_io": false 00:05:17.775 }, 00:05:17.775 "memory_domains": [ 00:05:17.775 { 00:05:17.775 "dma_device_id": "system", 00:05:17.775 
"dma_device_type": 1 00:05:17.775 }, 00:05:17.775 { 00:05:17.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.775 "dma_device_type": 2 00:05:17.775 } 00:05:17.775 ], 00:05:17.775 "driver_specific": {} 00:05:17.775 } 00:05:17.775 ]' 00:05:17.775 02:24:51 -- rpc/rpc.sh@17 -- # jq length 00:05:18.038 02:24:51 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.038 02:24:51 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:18.038 02:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.038 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.038 [2024-04-27 02:24:51.409977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:18.038 [2024-04-27 02:24:51.410007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.038 [2024-04-27 02:24:51.410020] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f67b50 00:05:18.038 [2024-04-27 02:24:51.410032] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.038 [2024-04-27 02:24:51.411255] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.038 [2024-04-27 02:24:51.411275] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.038 Passthru0 00:05:18.038 02:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.038 02:24:51 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:18.038 02:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.038 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.038 02:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.038 02:24:51 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.038 { 00:05:18.038 "name": "Malloc2", 00:05:18.038 "aliases": [ 00:05:18.038 "6752bee9-e1a3-450e-bec2-0338367889e5" 00:05:18.038 ], 00:05:18.038 "product_name": "Malloc disk", 00:05:18.038 "block_size": 512, 00:05:18.038 "num_blocks": 16384, 00:05:18.038 "uuid": "6752bee9-e1a3-450e-bec2-0338367889e5", 00:05:18.038 "assigned_rate_limits": { 00:05:18.038 "rw_ios_per_sec": 0, 00:05:18.038 "rw_mbytes_per_sec": 0, 00:05:18.038 "r_mbytes_per_sec": 0, 00:05:18.038 "w_mbytes_per_sec": 0 00:05:18.038 }, 00:05:18.038 "claimed": true, 00:05:18.038 "claim_type": "exclusive_write", 00:05:18.038 "zoned": false, 00:05:18.038 "supported_io_types": { 00:05:18.038 "read": true, 00:05:18.038 "write": true, 00:05:18.038 "unmap": true, 00:05:18.038 "write_zeroes": true, 00:05:18.038 "flush": true, 00:05:18.038 "reset": true, 00:05:18.038 "compare": false, 00:05:18.038 "compare_and_write": false, 00:05:18.038 "abort": true, 00:05:18.038 "nvme_admin": false, 00:05:18.038 "nvme_io": false 00:05:18.038 }, 00:05:18.038 "memory_domains": [ 00:05:18.038 { 00:05:18.038 "dma_device_id": "system", 00:05:18.038 "dma_device_type": 1 00:05:18.038 }, 00:05:18.038 { 00:05:18.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.038 "dma_device_type": 2 00:05:18.038 } 00:05:18.038 ], 00:05:18.038 "driver_specific": {} 00:05:18.038 }, 00:05:18.038 { 00:05:18.038 "name": "Passthru0", 00:05:18.038 "aliases": [ 00:05:18.038 "02e46fab-452e-564f-8571-e4e76ff222a8" 00:05:18.038 ], 00:05:18.038 "product_name": "passthru", 00:05:18.038 "block_size": 512, 00:05:18.038 "num_blocks": 16384, 00:05:18.038 "uuid": "02e46fab-452e-564f-8571-e4e76ff222a8", 00:05:18.038 "assigned_rate_limits": { 00:05:18.038 "rw_ios_per_sec": 0, 00:05:18.038 "rw_mbytes_per_sec": 0, 00:05:18.038 "r_mbytes_per_sec": 0, 00:05:18.038 
"w_mbytes_per_sec": 0 00:05:18.038 }, 00:05:18.038 "claimed": false, 00:05:18.038 "zoned": false, 00:05:18.038 "supported_io_types": { 00:05:18.038 "read": true, 00:05:18.038 "write": true, 00:05:18.038 "unmap": true, 00:05:18.038 "write_zeroes": true, 00:05:18.038 "flush": true, 00:05:18.038 "reset": true, 00:05:18.038 "compare": false, 00:05:18.038 "compare_and_write": false, 00:05:18.038 "abort": true, 00:05:18.038 "nvme_admin": false, 00:05:18.038 "nvme_io": false 00:05:18.038 }, 00:05:18.038 "memory_domains": [ 00:05:18.038 { 00:05:18.038 "dma_device_id": "system", 00:05:18.038 "dma_device_type": 1 00:05:18.038 }, 00:05:18.038 { 00:05:18.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.038 "dma_device_type": 2 00:05:18.038 } 00:05:18.038 ], 00:05:18.038 "driver_specific": { 00:05:18.038 "passthru": { 00:05:18.038 "name": "Passthru0", 00:05:18.038 "base_bdev_name": "Malloc2" 00:05:18.038 } 00:05:18.038 } 00:05:18.038 } 00:05:18.038 ]' 00:05:18.038 02:24:51 -- rpc/rpc.sh@21 -- # jq length 00:05:18.038 02:24:51 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.038 02:24:51 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.038 02:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.038 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.038 02:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.038 02:24:51 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:18.038 02:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.038 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.038 02:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.038 02:24:51 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.038 02:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:18.038 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.038 02:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:18.038 02:24:51 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.038 02:24:51 -- rpc/rpc.sh@26 -- # jq length 00:05:18.038 02:24:51 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.038 00:05:18.038 real 0m0.281s 00:05:18.038 user 0m0.185s 00:05:18.038 sys 0m0.036s 00:05:18.038 02:24:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.038 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.038 ************************************ 00:05:18.038 END TEST rpc_daemon_integrity 00:05:18.038 ************************************ 00:05:18.038 02:24:51 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:18.038 02:24:51 -- rpc/rpc.sh@84 -- # killprocess 4104567 00:05:18.039 02:24:51 -- common/autotest_common.sh@936 -- # '[' -z 4104567 ']' 00:05:18.039 02:24:51 -- common/autotest_common.sh@940 -- # kill -0 4104567 00:05:18.039 02:24:51 -- common/autotest_common.sh@941 -- # uname 00:05:18.039 02:24:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.039 02:24:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4104567 00:05:18.039 02:24:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:18.039 02:24:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:18.039 02:24:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4104567' 00:05:18.039 killing process with pid 4104567 00:05:18.039 02:24:51 -- common/autotest_common.sh@955 -- # kill 4104567 00:05:18.039 02:24:51 -- common/autotest_common.sh@960 -- # wait 4104567 00:05:18.300 00:05:18.300 real 0m2.841s 00:05:18.300 user 0m3.763s 
00:05:18.300 sys 0m0.852s 00:05:18.300 02:24:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.300 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.300 ************************************ 00:05:18.300 END TEST rpc 00:05:18.300 ************************************ 00:05:18.300 02:24:51 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:18.300 02:24:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.300 02:24:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.300 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.561 ************************************ 00:05:18.561 START TEST skip_rpc 00:05:18.561 ************************************ 00:05:18.561 02:24:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:18.561 * Looking for test storage... 00:05:18.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:18.561 02:24:52 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:18.561 02:24:52 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:18.561 02:24:52 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:18.561 02:24:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.561 02:24:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.561 02:24:52 -- common/autotest_common.sh@10 -- # set +x 00:05:18.822 ************************************ 00:05:18.822 START TEST skip_rpc 00:05:18.822 ************************************ 00:05:18.822 02:24:52 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:05:18.822 02:24:52 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4105455 00:05:18.822 02:24:52 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.822 02:24:52 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:18.822 02:24:52 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:18.822 [2024-04-27 02:24:52.351995] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:05:18.823 [2024-04-27 02:24:52.352053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4105455 ] 00:05:18.823 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.823 [2024-04-27 02:24:52.417250] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.083 [2024-04-27 02:24:52.488788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.377 02:24:57 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:24.377 02:24:57 -- common/autotest_common.sh@638 -- # local es=0 00:05:24.377 02:24:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:24.377 02:24:57 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:24.377 02:24:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:24.377 02:24:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:24.377 02:24:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:24.377 02:24:57 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:05:24.377 02:24:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.377 02:24:57 -- common/autotest_common.sh@10 -- # set +x 00:05:24.377 02:24:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:24.377 02:24:57 -- common/autotest_common.sh@641 -- # es=1 00:05:24.377 02:24:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:24.377 02:24:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:24.377 02:24:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:24.377 02:24:57 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:24.377 02:24:57 -- rpc/skip_rpc.sh@23 -- # killprocess 4105455 00:05:24.377 02:24:57 -- common/autotest_common.sh@936 -- # '[' -z 4105455 ']' 00:05:24.377 02:24:57 -- common/autotest_common.sh@940 -- # kill -0 4105455 00:05:24.377 02:24:57 -- common/autotest_common.sh@941 -- # uname 00:05:24.377 02:24:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.377 02:24:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4105455 00:05:24.377 02:24:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.377 02:24:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.377 02:24:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4105455' 00:05:24.377 killing process with pid 4105455 00:05:24.377 02:24:57 -- common/autotest_common.sh@955 -- # kill 4105455 00:05:24.377 02:24:57 -- common/autotest_common.sh@960 -- # wait 4105455 00:05:24.377 00:05:24.377 real 0m5.278s 00:05:24.377 user 0m5.089s 00:05:24.377 sys 0m0.228s 00:05:24.377 02:24:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.377 02:24:57 -- common/autotest_common.sh@10 -- # set +x 00:05:24.377 ************************************ 00:05:24.377 END TEST skip_rpc 00:05:24.377 ************************************ 00:05:24.377 02:24:57 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:24.377 02:24:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.377 02:24:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.377 02:24:57 -- common/autotest_common.sh@10 -- # set +x 00:05:24.377 ************************************ 00:05:24.377 START TEST skip_rpc_with_json 00:05:24.377 ************************************ 
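The skip_rpc case that just passed starts the target with --no-rpc-server and asserts that any RPC fails, which is why the xtrace wraps spdk_get_version in NOT and settles on es=1. Reproduced by hand it looks roughly like this:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    scripts/rpc.py spdk_get_version          # expected to fail, no /var/tmp/spdk.sock is created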
00:05:24.377 02:24:57 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:24.377 02:24:57 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:24.377 02:24:57 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4106567 00:05:24.377 02:24:57 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.377 02:24:57 -- rpc/skip_rpc.sh@31 -- # waitforlisten 4106567 00:05:24.377 02:24:57 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.377 02:24:57 -- common/autotest_common.sh@817 -- # '[' -z 4106567 ']' 00:05:24.377 02:24:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.377 02:24:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:24.377 02:24:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.377 02:24:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:24.377 02:24:57 -- common/autotest_common.sh@10 -- # set +x 00:05:24.377 [2024-04-27 02:24:57.805294] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:05:24.377 [2024-04-27 02:24:57.805345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4106567 ] 00:05:24.377 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.377 [2024-04-27 02:24:57.866767] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.377 [2024-04-27 02:24:57.937294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.983 02:24:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.983 02:24:58 -- common/autotest_common.sh@850 -- # return 0 00:05:24.983 02:24:58 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:24.983 02:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.983 02:24:58 -- common/autotest_common.sh@10 -- # set +x 00:05:24.983 [2024-04-27 02:24:58.568105] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:24.983 request: 00:05:24.983 { 00:05:24.983 "trtype": "tcp", 00:05:24.983 "method": "nvmf_get_transports", 00:05:24.983 "req_id": 1 00:05:24.983 } 00:05:24.983 Got JSON-RPC error response 00:05:24.983 response: 00:05:24.983 { 00:05:24.983 "code": -19, 00:05:24.983 "message": "No such device" 00:05:24.983 } 00:05:24.983 02:24:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:24.983 02:24:58 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:24.983 02:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.983 02:24:58 -- common/autotest_common.sh@10 -- # set +x 00:05:24.983 [2024-04-27 02:24:58.576208] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.983 02:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.983 02:24:58 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:24.983 02:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.983 02:24:58 -- common/autotest_common.sh@10 -- # set +x 00:05:25.244 02:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:25.244 02:24:58 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:25.244 { 
00:05:25.244 "subsystems": [ 00:05:25.244 { 00:05:25.244 "subsystem": "vfio_user_target", 00:05:25.244 "config": null 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "subsystem": "keyring", 00:05:25.244 "config": [] 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "subsystem": "iobuf", 00:05:25.244 "config": [ 00:05:25.244 { 00:05:25.244 "method": "iobuf_set_options", 00:05:25.244 "params": { 00:05:25.244 "small_pool_count": 8192, 00:05:25.244 "large_pool_count": 1024, 00:05:25.244 "small_bufsize": 8192, 00:05:25.244 "large_bufsize": 135168 00:05:25.244 } 00:05:25.244 } 00:05:25.244 ] 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "subsystem": "sock", 00:05:25.244 "config": [ 00:05:25.244 { 00:05:25.244 "method": "sock_impl_set_options", 00:05:25.244 "params": { 00:05:25.244 "impl_name": "posix", 00:05:25.244 "recv_buf_size": 2097152, 00:05:25.244 "send_buf_size": 2097152, 00:05:25.244 "enable_recv_pipe": true, 00:05:25.244 "enable_quickack": false, 00:05:25.244 "enable_placement_id": 0, 00:05:25.244 "enable_zerocopy_send_server": true, 00:05:25.244 "enable_zerocopy_send_client": false, 00:05:25.244 "zerocopy_threshold": 0, 00:05:25.244 "tls_version": 0, 00:05:25.244 "enable_ktls": false 00:05:25.244 } 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "method": "sock_impl_set_options", 00:05:25.244 "params": { 00:05:25.244 "impl_name": "ssl", 00:05:25.244 "recv_buf_size": 4096, 00:05:25.244 "send_buf_size": 4096, 00:05:25.244 "enable_recv_pipe": true, 00:05:25.244 "enable_quickack": false, 00:05:25.244 "enable_placement_id": 0, 00:05:25.244 "enable_zerocopy_send_server": true, 00:05:25.244 "enable_zerocopy_send_client": false, 00:05:25.244 "zerocopy_threshold": 0, 00:05:25.244 "tls_version": 0, 00:05:25.244 "enable_ktls": false 00:05:25.244 } 00:05:25.244 } 00:05:25.244 ] 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "subsystem": "vmd", 00:05:25.244 "config": [] 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "subsystem": "accel", 00:05:25.244 "config": [ 00:05:25.244 { 00:05:25.244 "method": "accel_set_options", 00:05:25.244 "params": { 00:05:25.244 "small_cache_size": 128, 00:05:25.244 "large_cache_size": 16, 00:05:25.244 "task_count": 2048, 00:05:25.244 "sequence_count": 2048, 00:05:25.244 "buf_count": 2048 00:05:25.244 } 00:05:25.244 } 00:05:25.244 ] 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "subsystem": "bdev", 00:05:25.244 "config": [ 00:05:25.244 { 00:05:25.244 "method": "bdev_set_options", 00:05:25.244 "params": { 00:05:25.244 "bdev_io_pool_size": 65535, 00:05:25.244 "bdev_io_cache_size": 256, 00:05:25.244 "bdev_auto_examine": true, 00:05:25.244 "iobuf_small_cache_size": 128, 00:05:25.244 "iobuf_large_cache_size": 16 00:05:25.244 } 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "method": "bdev_raid_set_options", 00:05:25.244 "params": { 00:05:25.244 "process_window_size_kb": 1024 00:05:25.244 } 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "method": "bdev_iscsi_set_options", 00:05:25.244 "params": { 00:05:25.244 "timeout_sec": 30 00:05:25.244 } 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "method": "bdev_nvme_set_options", 00:05:25.244 "params": { 00:05:25.244 "action_on_timeout": "none", 00:05:25.244 "timeout_us": 0, 00:05:25.244 "timeout_admin_us": 0, 00:05:25.244 "keep_alive_timeout_ms": 10000, 00:05:25.244 "arbitration_burst": 0, 00:05:25.244 "low_priority_weight": 0, 00:05:25.244 "medium_priority_weight": 0, 00:05:25.244 "high_priority_weight": 0, 00:05:25.244 "nvme_adminq_poll_period_us": 10000, 00:05:25.244 "nvme_ioq_poll_period_us": 0, 00:05:25.244 "io_queue_requests": 0, 00:05:25.244 
"delay_cmd_submit": true, 00:05:25.244 "transport_retry_count": 4, 00:05:25.244 "bdev_retry_count": 3, 00:05:25.244 "transport_ack_timeout": 0, 00:05:25.244 "ctrlr_loss_timeout_sec": 0, 00:05:25.244 "reconnect_delay_sec": 0, 00:05:25.244 "fast_io_fail_timeout_sec": 0, 00:05:25.244 "disable_auto_failback": false, 00:05:25.244 "generate_uuids": false, 00:05:25.244 "transport_tos": 0, 00:05:25.244 "nvme_error_stat": false, 00:05:25.244 "rdma_srq_size": 0, 00:05:25.244 "io_path_stat": false, 00:05:25.244 "allow_accel_sequence": false, 00:05:25.244 "rdma_max_cq_size": 0, 00:05:25.244 "rdma_cm_event_timeout_ms": 0, 00:05:25.244 "dhchap_digests": [ 00:05:25.244 "sha256", 00:05:25.244 "sha384", 00:05:25.244 "sha512" 00:05:25.244 ], 00:05:25.244 "dhchap_dhgroups": [ 00:05:25.244 "null", 00:05:25.244 "ffdhe2048", 00:05:25.244 "ffdhe3072", 00:05:25.244 "ffdhe4096", 00:05:25.244 "ffdhe6144", 00:05:25.244 "ffdhe8192" 00:05:25.244 ] 00:05:25.244 } 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "method": "bdev_nvme_set_hotplug", 00:05:25.244 "params": { 00:05:25.244 "period_us": 100000, 00:05:25.244 "enable": false 00:05:25.244 } 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "method": "bdev_wait_for_examine" 00:05:25.244 } 00:05:25.244 ] 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "subsystem": "scsi", 00:05:25.244 "config": null 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "subsystem": "scheduler", 00:05:25.244 "config": [ 00:05:25.244 { 00:05:25.244 "method": "framework_set_scheduler", 00:05:25.244 "params": { 00:05:25.244 "name": "static" 00:05:25.244 } 00:05:25.244 } 00:05:25.244 ] 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "subsystem": "vhost_scsi", 00:05:25.244 "config": [] 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "subsystem": "vhost_blk", 00:05:25.244 "config": [] 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "subsystem": "ublk", 00:05:25.244 "config": [] 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "subsystem": "nbd", 00:05:25.244 "config": [] 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "subsystem": "nvmf", 00:05:25.244 "config": [ 00:05:25.244 { 00:05:25.244 "method": "nvmf_set_config", 00:05:25.244 "params": { 00:05:25.244 "discovery_filter": "match_any", 00:05:25.244 "admin_cmd_passthru": { 00:05:25.244 "identify_ctrlr": false 00:05:25.244 } 00:05:25.244 } 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "method": "nvmf_set_max_subsystems", 00:05:25.244 "params": { 00:05:25.244 "max_subsystems": 1024 00:05:25.244 } 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "method": "nvmf_set_crdt", 00:05:25.244 "params": { 00:05:25.244 "crdt1": 0, 00:05:25.244 "crdt2": 0, 00:05:25.244 "crdt3": 0 00:05:25.244 } 00:05:25.244 }, 00:05:25.244 { 00:05:25.244 "method": "nvmf_create_transport", 00:05:25.244 "params": { 00:05:25.244 "trtype": "TCP", 00:05:25.244 "max_queue_depth": 128, 00:05:25.244 "max_io_qpairs_per_ctrlr": 127, 00:05:25.244 "in_capsule_data_size": 4096, 00:05:25.244 "max_io_size": 131072, 00:05:25.245 "io_unit_size": 131072, 00:05:25.245 "max_aq_depth": 128, 00:05:25.245 "num_shared_buffers": 511, 00:05:25.245 "buf_cache_size": 4294967295, 00:05:25.245 "dif_insert_or_strip": false, 00:05:25.245 "zcopy": false, 00:05:25.245 "c2h_success": true, 00:05:25.245 "sock_priority": 0, 00:05:25.245 "abort_timeout_sec": 1, 00:05:25.245 "ack_timeout": 0, 00:05:25.245 "data_wr_pool_size": 0 00:05:25.245 } 00:05:25.245 } 00:05:25.245 ] 00:05:25.245 }, 00:05:25.245 { 00:05:25.245 "subsystem": "iscsi", 00:05:25.245 "config": [ 00:05:25.245 { 00:05:25.245 "method": "iscsi_set_options", 00:05:25.245 "params": { 00:05:25.245 
"node_base": "iqn.2016-06.io.spdk", 00:05:25.245 "max_sessions": 128, 00:05:25.245 "max_connections_per_session": 2, 00:05:25.245 "max_queue_depth": 64, 00:05:25.245 "default_time2wait": 2, 00:05:25.245 "default_time2retain": 20, 00:05:25.245 "first_burst_length": 8192, 00:05:25.245 "immediate_data": true, 00:05:25.245 "allow_duplicated_isid": false, 00:05:25.245 "error_recovery_level": 0, 00:05:25.245 "nop_timeout": 60, 00:05:25.245 "nop_in_interval": 30, 00:05:25.245 "disable_chap": false, 00:05:25.245 "require_chap": false, 00:05:25.245 "mutual_chap": false, 00:05:25.245 "chap_group": 0, 00:05:25.245 "max_large_datain_per_connection": 64, 00:05:25.245 "max_r2t_per_connection": 4, 00:05:25.245 "pdu_pool_size": 36864, 00:05:25.245 "immediate_data_pool_size": 16384, 00:05:25.245 "data_out_pool_size": 2048 00:05:25.245 } 00:05:25.245 } 00:05:25.245 ] 00:05:25.245 } 00:05:25.245 ] 00:05:25.245 } 00:05:25.245 02:24:58 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:25.245 02:24:58 -- rpc/skip_rpc.sh@40 -- # killprocess 4106567 00:05:25.245 02:24:58 -- common/autotest_common.sh@936 -- # '[' -z 4106567 ']' 00:05:25.245 02:24:58 -- common/autotest_common.sh@940 -- # kill -0 4106567 00:05:25.245 02:24:58 -- common/autotest_common.sh@941 -- # uname 00:05:25.245 02:24:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.245 02:24:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4106567 00:05:25.245 02:24:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:25.245 02:24:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:25.245 02:24:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4106567' 00:05:25.245 killing process with pid 4106567 00:05:25.245 02:24:58 -- common/autotest_common.sh@955 -- # kill 4106567 00:05:25.245 02:24:58 -- common/autotest_common.sh@960 -- # wait 4106567 00:05:25.505 02:24:59 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4106846 00:05:25.505 02:24:59 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:25.505 02:24:59 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:30.794 02:25:04 -- rpc/skip_rpc.sh@50 -- # killprocess 4106846 00:05:30.794 02:25:04 -- common/autotest_common.sh@936 -- # '[' -z 4106846 ']' 00:05:30.794 02:25:04 -- common/autotest_common.sh@940 -- # kill -0 4106846 00:05:30.794 02:25:04 -- common/autotest_common.sh@941 -- # uname 00:05:30.794 02:25:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:30.794 02:25:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4106846 00:05:30.794 02:25:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:30.794 02:25:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:30.794 02:25:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4106846' 00:05:30.794 killing process with pid 4106846 00:05:30.794 02:25:04 -- common/autotest_common.sh@955 -- # kill 4106846 00:05:30.794 02:25:04 -- common/autotest_common.sh@960 -- # wait 4106846 00:05:30.794 02:25:04 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:30.794 02:25:04 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:30.794 00:05:30.794 real 0m6.531s 00:05:30.794 user 0m6.402s 00:05:30.794 sys 0m0.524s 00:05:30.794 
02:25:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:30.794 02:25:04 -- common/autotest_common.sh@10 -- # set +x 00:05:30.794 ************************************ 00:05:30.794 END TEST skip_rpc_with_json 00:05:30.794 ************************************ 00:05:30.794 02:25:04 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:30.794 02:25:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.794 02:25:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.794 02:25:04 -- common/autotest_common.sh@10 -- # set +x 00:05:31.056 ************************************ 00:05:31.056 START TEST skip_rpc_with_delay 00:05:31.056 ************************************ 00:05:31.056 02:25:04 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:31.056 02:25:04 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.056 02:25:04 -- common/autotest_common.sh@638 -- # local es=0 00:05:31.056 02:25:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.056 02:25:04 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.056 02:25:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:31.056 02:25:04 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.056 02:25:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:31.056 02:25:04 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.056 02:25:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:31.056 02:25:04 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.056 02:25:04 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:31.056 02:25:04 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.056 [2024-04-27 02:25:04.537951] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
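The NOT wrapper above inverts the exit status, so skip_rpc_with_delay passes precisely because spdk_tgt rejects this flag combination: --wait-for-rpc asks the app to pause until an RPC tells it to continue, which is meaningless when --no-rpc-server disables the RPC server entirely. Minus the harness, the rejected invocation is simply:

  # expected to fail, exactly as the error above shows (path shortened)
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc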
00:05:31.056 [2024-04-27 02:25:04.538054] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:31.056 02:25:04 -- common/autotest_common.sh@641 -- # es=1 00:05:31.056 02:25:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:31.056 02:25:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:31.056 02:25:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:31.056 00:05:31.056 real 0m0.087s 00:05:31.056 user 0m0.055s 00:05:31.056 sys 0m0.031s 00:05:31.056 02:25:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.056 02:25:04 -- common/autotest_common.sh@10 -- # set +x 00:05:31.056 ************************************ 00:05:31.056 END TEST skip_rpc_with_delay 00:05:31.056 ************************************ 00:05:31.056 02:25:04 -- rpc/skip_rpc.sh@77 -- # uname 00:05:31.056 02:25:04 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:31.056 02:25:04 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:31.056 02:25:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.056 02:25:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.056 02:25:04 -- common/autotest_common.sh@10 -- # set +x 00:05:31.317 ************************************ 00:05:31.317 START TEST exit_on_failed_rpc_init 00:05:31.317 ************************************ 00:05:31.317 02:25:04 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:31.317 02:25:04 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4108096 00:05:31.317 02:25:04 -- rpc/skip_rpc.sh@63 -- # waitforlisten 4108096 00:05:31.318 02:25:04 -- common/autotest_common.sh@817 -- # '[' -z 4108096 ']' 00:05:31.318 02:25:04 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.318 02:25:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.318 02:25:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.318 02:25:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.318 02:25:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.318 02:25:04 -- common/autotest_common.sh@10 -- # set +x 00:05:31.318 [2024-04-27 02:25:04.817407] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:05:31.318 [2024-04-27 02:25:04.817461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108096 ] 00:05:31.318 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.318 [2024-04-27 02:25:04.877759] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.579 [2024-04-27 02:25:04.942894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.151 02:25:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.151 02:25:05 -- common/autotest_common.sh@850 -- # return 0 00:05:32.151 02:25:05 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.151 02:25:05 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.151 02:25:05 -- common/autotest_common.sh@638 -- # local es=0 00:05:32.151 02:25:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.151 02:25:05 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.151 02:25:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:32.151 02:25:05 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.151 02:25:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:32.151 02:25:05 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.151 02:25:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:32.151 02:25:05 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.151 02:25:05 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:32.151 02:25:05 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.151 [2024-04-27 02:25:05.621564] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:05:32.151 [2024-04-27 02:25:05.621614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108255 ] 00:05:32.151 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.151 [2024-04-27 02:25:05.678945] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.151 [2024-04-27 02:25:05.741453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.151 [2024-04-27 02:25:05.741515] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
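This failure is the point of exit_on_failed_rpc_init: the first target already owns /var/tmp/spdk.sock, so a second instance started with the default RPC listen address must fail initialization and exit non-zero. When two targets genuinely need to coexist, each gets its own RPC socket via -r, as the json_config tests later in this log do; a sketch with a hypothetical second socket path:

  # second instance on its own RPC socket (socket name here is illustrative)
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock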
00:05:32.151 [2024-04-27 02:25:05.741525] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:32.151 [2024-04-27 02:25:05.741532] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.411 02:25:05 -- common/autotest_common.sh@641 -- # es=234 00:05:32.411 02:25:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:32.411 02:25:05 -- common/autotest_common.sh@650 -- # es=106 00:05:32.411 02:25:05 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:32.411 02:25:05 -- common/autotest_common.sh@658 -- # es=1 00:05:32.411 02:25:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:32.411 02:25:05 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:32.411 02:25:05 -- rpc/skip_rpc.sh@70 -- # killprocess 4108096 00:05:32.411 02:25:05 -- common/autotest_common.sh@936 -- # '[' -z 4108096 ']' 00:05:32.411 02:25:05 -- common/autotest_common.sh@940 -- # kill -0 4108096 00:05:32.411 02:25:05 -- common/autotest_common.sh@941 -- # uname 00:05:32.411 02:25:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.411 02:25:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4108096 00:05:32.411 02:25:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:32.411 02:25:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:32.411 02:25:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4108096' 00:05:32.411 killing process with pid 4108096 00:05:32.411 02:25:05 -- common/autotest_common.sh@955 -- # kill 4108096 00:05:32.411 02:25:05 -- common/autotest_common.sh@960 -- # wait 4108096 00:05:32.672 00:05:32.672 real 0m1.309s 00:05:32.672 user 0m1.534s 00:05:32.672 sys 0m0.349s 00:05:32.672 02:25:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.672 02:25:06 -- common/autotest_common.sh@10 -- # set +x 00:05:32.672 ************************************ 00:05:32.672 END TEST exit_on_failed_rpc_init 00:05:32.672 ************************************ 00:05:32.672 02:25:06 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:32.672 00:05:32.672 real 0m14.063s 00:05:32.672 user 0m13.395s 00:05:32.672 sys 0m1.626s 00:05:32.672 02:25:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.672 02:25:06 -- common/autotest_common.sh@10 -- # set +x 00:05:32.672 ************************************ 00:05:32.672 END TEST skip_rpc 00:05:32.672 ************************************ 00:05:32.672 02:25:06 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:32.672 02:25:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.672 02:25:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.672 02:25:06 -- common/autotest_common.sh@10 -- # set +x 00:05:32.672 ************************************ 00:05:32.672 START TEST rpc_client 00:05:32.672 ************************************ 00:05:32.672 02:25:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:32.932 * Looking for test storage... 
00:05:32.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:32.932 02:25:06 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:32.932 OK 00:05:32.932 02:25:06 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:32.932 00:05:32.932 real 0m0.134s 00:05:32.932 user 0m0.063s 00:05:32.932 sys 0m0.080s 00:05:32.932 02:25:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.932 02:25:06 -- common/autotest_common.sh@10 -- # set +x 00:05:32.932 ************************************ 00:05:32.932 END TEST rpc_client 00:05:32.932 ************************************ 00:05:32.932 02:25:06 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:32.932 02:25:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.932 02:25:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.932 02:25:06 -- common/autotest_common.sh@10 -- # set +x 00:05:33.193 ************************************ 00:05:33.193 START TEST json_config 00:05:33.193 ************************************ 00:05:33.193 02:25:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:33.193 02:25:06 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.193 02:25:06 -- nvmf/common.sh@7 -- # uname -s 00:05:33.193 02:25:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.193 02:25:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.193 02:25:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.193 02:25:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.193 02:25:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.193 02:25:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.193 02:25:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.193 02:25:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.193 02:25:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.193 02:25:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.193 02:25:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:33.193 02:25:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:33.193 02:25:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.193 02:25:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.193 02:25:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.193 02:25:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.193 02:25:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:33.193 02:25:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.193 02:25:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.193 02:25:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.193 02:25:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.193 02:25:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.193 02:25:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.193 02:25:06 -- paths/export.sh@5 -- # export PATH 00:05:33.193 02:25:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.193 02:25:06 -- nvmf/common.sh@47 -- # : 0 00:05:33.193 02:25:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:33.193 02:25:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:33.193 02:25:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.193 02:25:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.193 02:25:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.193 02:25:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:33.193 02:25:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:33.193 02:25:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:33.194 02:25:06 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:33.194 02:25:06 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:33.194 02:25:06 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:33.194 02:25:06 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:33.194 02:25:06 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:33.194 02:25:06 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:33.194 02:25:06 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:33.194 02:25:06 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:33.194 02:25:06 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:33.194 02:25:06 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:33.194 02:25:06 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:05:33.194 02:25:06 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:33.194 02:25:06 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:33.194 02:25:06 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:33.194 02:25:06 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:33.194 02:25:06 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:33.194 INFO: JSON configuration test init 00:05:33.194 02:25:06 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:33.194 02:25:06 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:33.194 02:25:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:33.194 02:25:06 -- common/autotest_common.sh@10 -- # set +x 00:05:33.194 02:25:06 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:33.194 02:25:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:33.194 02:25:06 -- common/autotest_common.sh@10 -- # set +x 00:05:33.194 02:25:06 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:33.194 02:25:06 -- json_config/common.sh@9 -- # local app=target 00:05:33.194 02:25:06 -- json_config/common.sh@10 -- # shift 00:05:33.194 02:25:06 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:33.194 02:25:06 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:33.194 02:25:06 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:33.194 02:25:06 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.194 02:25:06 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.194 02:25:06 -- json_config/common.sh@22 -- # app_pid["$app"]=4108710 00:05:33.194 02:25:06 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:33.194 Waiting for target to run... 00:05:33.194 02:25:06 -- json_config/common.sh@25 -- # waitforlisten 4108710 /var/tmp/spdk_tgt.sock 00:05:33.194 02:25:06 -- common/autotest_common.sh@817 -- # '[' -z 4108710 ']' 00:05:33.194 02:25:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.194 02:25:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:33.194 02:25:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:33.194 02:25:06 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:33.194 02:25:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:33.194 02:25:06 -- common/autotest_common.sh@10 -- # set +x 00:05:33.194 [2024-04-27 02:25:06.797583] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
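waitforlisten simply polls the new target's RPC socket until it answers, then the test proceeds. Outside the harness, readiness can be probed the same way with any lightweight RPC; a sketch using the socket path from this run (paths shortened):

  # returns once the target is accepting RPCs, even while it sits in --wait-for-rpc
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods > /dev/null && echo ready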
00:05:33.194 [2024-04-27 02:25:06.797637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108710 ] 00:05:33.454 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.454 [2024-04-27 02:25:07.016790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.454 [2024-04-27 02:25:07.065841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.025 02:25:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:34.025 02:25:07 -- common/autotest_common.sh@850 -- # return 0 00:05:34.025 02:25:07 -- json_config/common.sh@26 -- # echo '' 00:05:34.025 00:05:34.025 02:25:07 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:34.025 02:25:07 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:34.025 02:25:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:34.025 02:25:07 -- common/autotest_common.sh@10 -- # set +x 00:05:34.025 02:25:07 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:34.025 02:25:07 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:34.025 02:25:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:34.025 02:25:07 -- common/autotest_common.sh@10 -- # set +x 00:05:34.025 02:25:07 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:34.025 02:25:07 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:34.025 02:25:07 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:34.597 02:25:08 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:34.597 02:25:08 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:34.597 02:25:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:34.597 02:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.597 02:25:08 -- json_config/json_config.sh@45 -- # local ret=0 00:05:34.597 02:25:08 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:34.597 02:25:08 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:34.597 02:25:08 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:34.597 02:25:08 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:34.597 02:25:08 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:34.859 02:25:08 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:34.859 02:25:08 -- json_config/json_config.sh@48 -- # local get_types 00:05:34.859 02:25:08 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:34.859 02:25:08 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:34.859 02:25:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:34.859 02:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.859 02:25:08 -- json_config/json_config.sh@55 -- # return 0 00:05:34.859 02:25:08 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:34.859 02:25:08 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:34.859 02:25:08 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:34.859 02:25:08 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:34.859 02:25:08 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:34.859 02:25:08 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:34.859 02:25:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:34.859 02:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.859 02:25:08 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:34.859 02:25:08 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:34.859 02:25:08 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:34.859 02:25:08 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.859 02:25:08 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.859 MallocForNvmf0 00:05:34.859 02:25:08 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:34.859 02:25:08 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:35.121 MallocForNvmf1 00:05:35.121 02:25:08 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:35.121 02:25:08 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:35.121 [2024-04-27 02:25:08.741256] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:35.383 02:25:08 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:35.383 02:25:08 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:35.383 02:25:08 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:35.383 02:25:08 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:35.644 02:25:09 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:35.644 02:25:09 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:35.644 02:25:09 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.644 02:25:09 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.905 [2024-04-27 02:25:09.351246] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:35.905 02:25:09 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:35.905 02:25:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:35.905 
02:25:09 -- common/autotest_common.sh@10 -- # set +x 00:05:35.905 02:25:09 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:35.905 02:25:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:35.905 02:25:09 -- common/autotest_common.sh@10 -- # set +x 00:05:35.905 02:25:09 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:35.905 02:25:09 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.905 02:25:09 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:36.165 MallocBdevForConfigChangeCheck 00:05:36.165 02:25:09 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:36.165 02:25:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:36.165 02:25:09 -- common/autotest_common.sh@10 -- # set +x 00:05:36.165 02:25:09 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:36.165 02:25:09 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.427 02:25:09 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:36.427 INFO: shutting down applications... 00:05:36.427 02:25:09 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:36.427 02:25:09 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:36.427 02:25:09 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:36.427 02:25:09 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:36.998 Calling clear_iscsi_subsystem 00:05:36.998 Calling clear_nvmf_subsystem 00:05:36.998 Calling clear_nbd_subsystem 00:05:36.998 Calling clear_ublk_subsystem 00:05:36.998 Calling clear_vhost_blk_subsystem 00:05:36.998 Calling clear_vhost_scsi_subsystem 00:05:36.998 Calling clear_bdev_subsystem 00:05:36.998 02:25:10 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:36.998 02:25:10 -- json_config/json_config.sh@343 -- # count=100 00:05:36.998 02:25:10 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:36.998 02:25:10 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.998 02:25:10 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:36.998 02:25:10 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:37.259 02:25:10 -- json_config/json_config.sh@345 -- # break 00:05:37.259 02:25:10 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:37.259 02:25:10 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:37.259 02:25:10 -- json_config/common.sh@31 -- # local app=target 00:05:37.259 02:25:10 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:37.259 02:25:10 -- json_config/common.sh@35 -- # [[ -n 4108710 ]] 00:05:37.259 02:25:10 -- json_config/common.sh@38 -- # kill -SIGINT 4108710 00:05:37.259 02:25:10 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:37.259 02:25:10 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.259 02:25:10 -- json_config/common.sh@41 -- # kill -0 4108710 00:05:37.259 02:25:10 -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.832 02:25:11 -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.832 02:25:11 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.832 02:25:11 -- json_config/common.sh@41 -- # kill -0 4108710 00:05:37.832 02:25:11 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.832 02:25:11 -- json_config/common.sh@43 -- # break 00:05:37.832 02:25:11 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.832 02:25:11 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.832 SPDK target shutdown done 00:05:37.832 02:25:11 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:37.832 INFO: relaunching applications... 00:05:37.832 02:25:11 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.832 02:25:11 -- json_config/common.sh@9 -- # local app=target 00:05:37.832 02:25:11 -- json_config/common.sh@10 -- # shift 00:05:37.832 02:25:11 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:37.832 02:25:11 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:37.832 02:25:11 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:37.832 02:25:11 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.832 02:25:11 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.832 02:25:11 -- json_config/common.sh@22 -- # app_pid["$app"]=4109537 00:05:37.832 02:25:11 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.832 Waiting for target to run... 00:05:37.832 02:25:11 -- json_config/common.sh@25 -- # waitforlisten 4109537 /var/tmp/spdk_tgt.sock 00:05:37.832 02:25:11 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.832 02:25:11 -- common/autotest_common.sh@817 -- # '[' -z 4109537 ']' 00:05:37.832 02:25:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.832 02:25:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:37.832 02:25:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.832 02:25:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:37.832 02:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:37.832 [2024-04-27 02:25:11.220720] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
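The relaunch above replays spdk_tgt_config.json, which the previous instance accumulated over RPC a moment ago: two malloc bdevs, a TCP transport, one subsystem with both namespaces, and a listener on 127.0.0.1:4420. Stripped of the tgt_rpc wrapper (and with paths shortened), that build-up was:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420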
00:05:37.832 [2024-04-27 02:25:11.220786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109537 ] 00:05:37.832 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.093 [2024-04-27 02:25:11.532631] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.093 [2024-04-27 02:25:11.590235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.667 [2024-04-27 02:25:12.075248] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.667 [2024-04-27 02:25:12.107645] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.667 02:25:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:38.667 02:25:12 -- common/autotest_common.sh@850 -- # return 0 00:05:38.667 02:25:12 -- json_config/common.sh@26 -- # echo '' 00:05:38.667 00:05:38.667 02:25:12 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:38.667 02:25:12 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:38.667 INFO: Checking if target configuration is the same... 00:05:38.667 02:25:12 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.667 02:25:12 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:38.667 02:25:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.667 + '[' 2 -ne 2 ']' 00:05:38.667 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.667 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:38.667 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:38.667 +++ basename /dev/fd/62 00:05:38.667 ++ mktemp /tmp/62.XXX 00:05:38.667 + tmp_file_1=/tmp/62.aRN 00:05:38.667 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.667 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.667 + tmp_file_2=/tmp/spdk_tgt_config.json.Zxp 00:05:38.667 + ret=0 00:05:38.667 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.928 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.928 + diff -u /tmp/62.aRN /tmp/spdk_tgt_config.json.Zxp 00:05:38.928 + echo 'INFO: JSON config files are the same' 00:05:38.928 INFO: JSON config files are the same 00:05:38.928 + rm /tmp/62.aRN /tmp/spdk_tgt_config.json.Zxp 00:05:38.928 + exit 0 00:05:38.928 02:25:12 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:38.928 02:25:12 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:38.928 INFO: changing configuration and checking if this can be detected... 
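Both the "JSON config files are the same" verdict above and the change detection that follows come down to one comparison: dump the live configuration with save_config, normalize it and the reference file with config_filter.py -method sort, and diff the results. Reduced to plain commands (temp file names illustrative, paths shortened):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/live.sorted
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/ref.sorted
  diff -u /tmp/live.sorted /tmp/ref.sorted && echo 'configs match'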
00:05:38.928 02:25:12 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.928 02:25:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.189 02:25:12 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.189 02:25:12 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:39.189 02:25:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.189 + '[' 2 -ne 2 ']' 00:05:39.189 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:39.189 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:39.189 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:39.189 +++ basename /dev/fd/62 00:05:39.189 ++ mktemp /tmp/62.XXX 00:05:39.189 + tmp_file_1=/tmp/62.FOC 00:05:39.189 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.189 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:39.189 + tmp_file_2=/tmp/spdk_tgt_config.json.xQT 00:05:39.189 + ret=0 00:05:39.189 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.450 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.450 + diff -u /tmp/62.FOC /tmp/spdk_tgt_config.json.xQT 00:05:39.450 + ret=1 00:05:39.450 + echo '=== Start of file: /tmp/62.FOC ===' 00:05:39.450 + cat /tmp/62.FOC 00:05:39.450 + echo '=== End of file: /tmp/62.FOC ===' 00:05:39.450 + echo '' 00:05:39.450 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xQT ===' 00:05:39.450 + cat /tmp/spdk_tgt_config.json.xQT 00:05:39.450 + echo '=== End of file: /tmp/spdk_tgt_config.json.xQT ===' 00:05:39.450 + echo '' 00:05:39.450 + rm /tmp/62.FOC /tmp/spdk_tgt_config.json.xQT 00:05:39.450 + exit 1 00:05:39.450 02:25:13 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:39.450 INFO: configuration change detected. 
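MallocBdevForConfigChangeCheck exists only to make this negative check cheap: it is a throwaway malloc bdev created earlier while building the configuration, so deleting it is guaranteed to make the live config diverge from the saved file and drive the diff return code to 1, which is what "configuration change detected" reports. The deletion is the single RPC shown above:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck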
00:05:39.450 02:25:13 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:39.450 02:25:13 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:39.450 02:25:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:39.450 02:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:39.450 02:25:13 -- json_config/json_config.sh@307 -- # local ret=0 00:05:39.450 02:25:13 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:39.450 02:25:13 -- json_config/json_config.sh@317 -- # [[ -n 4109537 ]] 00:05:39.450 02:25:13 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:39.450 02:25:13 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:39.450 02:25:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:39.450 02:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:39.450 02:25:13 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:39.450 02:25:13 -- json_config/json_config.sh@193 -- # uname -s 00:05:39.450 02:25:13 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:39.450 02:25:13 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:39.450 02:25:13 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:39.450 02:25:13 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:39.450 02:25:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:39.450 02:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:39.712 02:25:13 -- json_config/json_config.sh@323 -- # killprocess 4109537 00:05:39.712 02:25:13 -- common/autotest_common.sh@936 -- # '[' -z 4109537 ']' 00:05:39.712 02:25:13 -- common/autotest_common.sh@940 -- # kill -0 4109537 00:05:39.712 02:25:13 -- common/autotest_common.sh@941 -- # uname 00:05:39.712 02:25:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:39.712 02:25:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4109537 00:05:39.712 02:25:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:39.712 02:25:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:39.712 02:25:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4109537' 00:05:39.712 killing process with pid 4109537 00:05:39.712 02:25:13 -- common/autotest_common.sh@955 -- # kill 4109537 00:05:39.712 02:25:13 -- common/autotest_common.sh@960 -- # wait 4109537 00:05:39.973 02:25:13 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.973 02:25:13 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:39.973 02:25:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:39.973 02:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:39.973 02:25:13 -- json_config/json_config.sh@328 -- # return 0 00:05:39.973 02:25:13 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:39.973 INFO: Success 00:05:39.973 00:05:39.973 real 0m6.857s 00:05:39.973 user 0m8.323s 00:05:39.973 sys 0m1.615s 00:05:39.973 02:25:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.973 02:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:39.973 ************************************ 00:05:39.973 END TEST json_config 00:05:39.973 ************************************ 00:05:39.973 02:25:13 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:39.973 02:25:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.973 02:25:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.973 02:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:40.235 ************************************ 00:05:40.235 START TEST json_config_extra_key 00:05:40.235 ************************************ 00:05:40.235 02:25:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:40.235 02:25:13 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:40.235 02:25:13 -- nvmf/common.sh@7 -- # uname -s 00:05:40.235 02:25:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.235 02:25:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.235 02:25:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.235 02:25:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.235 02:25:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.235 02:25:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.235 02:25:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.235 02:25:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.235 02:25:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.235 02:25:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.235 02:25:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:40.235 02:25:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:40.235 02:25:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.235 02:25:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.235 02:25:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:40.235 02:25:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.235 02:25:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:40.235 02:25:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.235 02:25:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.235 02:25:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.235 02:25:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.235 02:25:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.235 02:25:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.235 02:25:13 -- paths/export.sh@5 -- # export PATH 00:05:40.235 02:25:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.235 02:25:13 -- nvmf/common.sh@47 -- # : 0 00:05:40.235 02:25:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:40.235 02:25:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:40.235 02:25:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.235 02:25:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.235 02:25:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.235 02:25:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:40.235 02:25:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:40.235 02:25:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:40.235 02:25:13 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:40.235 02:25:13 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:40.235 02:25:13 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:40.235 02:25:13 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:40.235 02:25:13 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:40.235 02:25:13 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:40.235 02:25:13 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:40.235 02:25:13 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:40.236 02:25:13 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:40.236 02:25:13 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:40.236 02:25:13 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:40.236 INFO: launching applications... 
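The extra_key variant that follows exercises the other startup path: instead of configuring a bare target over RPC, it boots spdk_tgt directly from a canned JSON file and then only has to confirm the target came up and shuts down cleanly. Minus the harness, the launch on the next line is just (paths shortened):

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json test/json_config/extra_key.json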
00:05:40.236 02:25:13 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:40.236 02:25:13 -- json_config/common.sh@9 -- # local app=target 00:05:40.236 02:25:13 -- json_config/common.sh@10 -- # shift 00:05:40.236 02:25:13 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.236 02:25:13 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.236 02:25:13 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.236 02:25:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.236 02:25:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.236 02:25:13 -- json_config/common.sh@22 -- # app_pid["$app"]=4110305 00:05:40.236 02:25:13 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.236 Waiting for target to run... 00:05:40.236 02:25:13 -- json_config/common.sh@25 -- # waitforlisten 4110305 /var/tmp/spdk_tgt.sock 00:05:40.236 02:25:13 -- common/autotest_common.sh@817 -- # '[' -z 4110305 ']' 00:05:40.236 02:25:13 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:40.236 02:25:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.236 02:25:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:40.236 02:25:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.236 02:25:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:40.236 02:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:40.236 [2024-04-27 02:25:13.815767] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:05:40.236 [2024-04-27 02:25:13.815833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110305 ] 00:05:40.236 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.498 [2024-04-27 02:25:14.083889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.759 [2024-04-27 02:25:14.135703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.020 02:25:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:41.020 02:25:14 -- common/autotest_common.sh@850 -- # return 0 00:05:41.020 02:25:14 -- json_config/common.sh@26 -- # echo '' 00:05:41.020 00:05:41.020 02:25:14 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:41.020 INFO: shutting down applications... 
00:05:41.020 02:25:14 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:41.020 02:25:14 -- json_config/common.sh@31 -- # local app=target 00:05:41.020 02:25:14 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:41.020 02:25:14 -- json_config/common.sh@35 -- # [[ -n 4110305 ]] 00:05:41.020 02:25:14 -- json_config/common.sh@38 -- # kill -SIGINT 4110305 00:05:41.020 02:25:14 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:41.020 02:25:14 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.020 02:25:14 -- json_config/common.sh@41 -- # kill -0 4110305 00:05:41.020 02:25:14 -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.593 02:25:15 -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.593 02:25:15 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.593 02:25:15 -- json_config/common.sh@41 -- # kill -0 4110305 00:05:41.593 02:25:15 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:41.593 02:25:15 -- json_config/common.sh@43 -- # break 00:05:41.593 02:25:15 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:41.593 02:25:15 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:41.593 SPDK target shutdown done 00:05:41.593 02:25:15 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:41.593 Success 00:05:41.593 00:05:41.593 real 0m1.431s 00:05:41.593 user 0m1.085s 00:05:41.593 sys 0m0.367s 00:05:41.593 02:25:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:41.593 02:25:15 -- common/autotest_common.sh@10 -- # set +x 00:05:41.593 ************************************ 00:05:41.593 END TEST json_config_extra_key 00:05:41.593 ************************************ 00:05:41.593 02:25:15 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.593 02:25:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.593 02:25:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.593 02:25:15 -- common/autotest_common.sh@10 -- # set +x 00:05:41.855 ************************************ 00:05:41.855 START TEST alias_rpc 00:05:41.855 ************************************ 00:05:41.855 02:25:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.855 * Looking for test storage... 00:05:41.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:41.855 02:25:15 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:41.855 02:25:15 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4110757 00:05:41.855 02:25:15 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4110757 00:05:41.855 02:25:15 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:41.855 02:25:15 -- common/autotest_common.sh@817 -- # '[' -z 4110757 ']' 00:05:41.855 02:25:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.855 02:25:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:41.855 02:25:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
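Once this target is up, the alias_rpc test drives it with a single call, rpc.py load_config -i, replaying a configuration through the RPC layer with the -i switch that this test exists to cover (accepting the older aliased method names); everything else in the test is start-up and tear-down. Against the default socket used here, that is simply (path shortened):

  scripts/rpc.py load_config -i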
00:05:41.855 02:25:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:41.855 02:25:15 -- common/autotest_common.sh@10 -- # set +x 00:05:41.855 [2024-04-27 02:25:15.415439] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:05:41.855 [2024-04-27 02:25:15.415489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110757 ] 00:05:41.855 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.855 [2024-04-27 02:25:15.474142] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.116 [2024-04-27 02:25:15.538196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.689 02:25:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:42.689 02:25:16 -- common/autotest_common.sh@850 -- # return 0 00:05:42.689 02:25:16 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:42.950 02:25:16 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4110757 00:05:42.950 02:25:16 -- common/autotest_common.sh@936 -- # '[' -z 4110757 ']' 00:05:42.950 02:25:16 -- common/autotest_common.sh@940 -- # kill -0 4110757 00:05:42.950 02:25:16 -- common/autotest_common.sh@941 -- # uname 00:05:42.950 02:25:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.950 02:25:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4110757 00:05:42.950 02:25:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.950 02:25:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.950 02:25:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4110757' 00:05:42.950 killing process with pid 4110757 00:05:42.950 02:25:16 -- common/autotest_common.sh@955 -- # kill 4110757 00:05:42.950 02:25:16 -- common/autotest_common.sh@960 -- # wait 4110757 00:05:43.212 00:05:43.212 real 0m1.339s 00:05:43.212 user 0m1.459s 00:05:43.212 sys 0m0.357s 00:05:43.212 02:25:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.212 02:25:16 -- common/autotest_common.sh@10 -- # set +x 00:05:43.212 ************************************ 00:05:43.212 END TEST alias_rpc 00:05:43.212 ************************************ 00:05:43.212 02:25:16 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:43.212 02:25:16 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:43.212 02:25:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.212 02:25:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.212 02:25:16 -- common/autotest_common.sh@10 -- # set +x 00:05:43.212 ************************************ 00:05:43.212 START TEST spdkcli_tcp 00:05:43.212 ************************************ 00:05:43.212 02:25:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:43.473 * Looking for test storage... 
00:05:43.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:43.473 02:25:16 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:43.473 02:25:16 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:43.473 02:25:16 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:43.473 02:25:16 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:43.473 02:25:16 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:43.473 02:25:16 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:43.473 02:25:16 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:43.474 02:25:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:43.474 02:25:16 -- common/autotest_common.sh@10 -- # set +x 00:05:43.474 02:25:16 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4111265 00:05:43.474 02:25:16 -- spdkcli/tcp.sh@27 -- # waitforlisten 4111265 00:05:43.474 02:25:16 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:43.474 02:25:16 -- common/autotest_common.sh@817 -- # '[' -z 4111265 ']' 00:05:43.474 02:25:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.474 02:25:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:43.474 02:25:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.474 02:25:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:43.474 02:25:16 -- common/autotest_common.sh@10 -- # set +x 00:05:43.474 [2024-04-27 02:25:16.958948] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:05:43.474 [2024-04-27 02:25:16.959021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4111265 ] 00:05:43.474 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.474 [2024-04-27 02:25:17.023628] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.734 [2024-04-27 02:25:17.096799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.734 [2024-04-27 02:25:17.096805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.307 02:25:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:44.307 02:25:17 -- common/autotest_common.sh@850 -- # return 0 00:05:44.307 02:25:17 -- spdkcli/tcp.sh@31 -- # socat_pid=4111279 00:05:44.307 02:25:17 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:44.307 02:25:17 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:44.307 [ 00:05:44.307 "bdev_malloc_delete", 00:05:44.307 "bdev_malloc_create", 00:05:44.307 "bdev_null_resize", 00:05:44.307 "bdev_null_delete", 00:05:44.307 "bdev_null_create", 00:05:44.307 "bdev_nvme_cuse_unregister", 00:05:44.307 "bdev_nvme_cuse_register", 00:05:44.307 "bdev_opal_new_user", 00:05:44.307 "bdev_opal_set_lock_state", 00:05:44.307 "bdev_opal_delete", 00:05:44.307 "bdev_opal_get_info", 00:05:44.307 "bdev_opal_create", 00:05:44.307 "bdev_nvme_opal_revert", 00:05:44.307 "bdev_nvme_opal_init", 00:05:44.307 "bdev_nvme_send_cmd", 00:05:44.307 "bdev_nvme_get_path_iostat", 00:05:44.307 "bdev_nvme_get_mdns_discovery_info", 00:05:44.307 "bdev_nvme_stop_mdns_discovery", 00:05:44.307 "bdev_nvme_start_mdns_discovery", 00:05:44.307 "bdev_nvme_set_multipath_policy", 00:05:44.307 "bdev_nvme_set_preferred_path", 00:05:44.307 "bdev_nvme_get_io_paths", 00:05:44.307 "bdev_nvme_remove_error_injection", 00:05:44.307 "bdev_nvme_add_error_injection", 00:05:44.307 "bdev_nvme_get_discovery_info", 00:05:44.307 "bdev_nvme_stop_discovery", 00:05:44.307 "bdev_nvme_start_discovery", 00:05:44.307 "bdev_nvme_get_controller_health_info", 00:05:44.307 "bdev_nvme_disable_controller", 00:05:44.307 "bdev_nvme_enable_controller", 00:05:44.307 "bdev_nvme_reset_controller", 00:05:44.307 "bdev_nvme_get_transport_statistics", 00:05:44.307 "bdev_nvme_apply_firmware", 00:05:44.307 "bdev_nvme_detach_controller", 00:05:44.307 "bdev_nvme_get_controllers", 00:05:44.307 "bdev_nvme_attach_controller", 00:05:44.307 "bdev_nvme_set_hotplug", 00:05:44.307 "bdev_nvme_set_options", 00:05:44.307 "bdev_passthru_delete", 00:05:44.307 "bdev_passthru_create", 00:05:44.307 "bdev_lvol_grow_lvstore", 00:05:44.307 "bdev_lvol_get_lvols", 00:05:44.307 "bdev_lvol_get_lvstores", 00:05:44.307 "bdev_lvol_delete", 00:05:44.307 "bdev_lvol_set_read_only", 00:05:44.307 "bdev_lvol_resize", 00:05:44.307 "bdev_lvol_decouple_parent", 00:05:44.307 "bdev_lvol_inflate", 00:05:44.307 "bdev_lvol_rename", 00:05:44.307 "bdev_lvol_clone_bdev", 00:05:44.307 "bdev_lvol_clone", 00:05:44.307 "bdev_lvol_snapshot", 00:05:44.307 "bdev_lvol_create", 00:05:44.307 "bdev_lvol_delete_lvstore", 00:05:44.307 "bdev_lvol_rename_lvstore", 00:05:44.307 "bdev_lvol_create_lvstore", 00:05:44.307 "bdev_raid_set_options", 00:05:44.307 "bdev_raid_remove_base_bdev", 00:05:44.307 "bdev_raid_add_base_bdev", 00:05:44.307 "bdev_raid_delete", 00:05:44.307 "bdev_raid_create", 
00:05:44.307 "bdev_raid_get_bdevs", 00:05:44.307 "bdev_error_inject_error", 00:05:44.307 "bdev_error_delete", 00:05:44.307 "bdev_error_create", 00:05:44.307 "bdev_split_delete", 00:05:44.307 "bdev_split_create", 00:05:44.307 "bdev_delay_delete", 00:05:44.307 "bdev_delay_create", 00:05:44.307 "bdev_delay_update_latency", 00:05:44.307 "bdev_zone_block_delete", 00:05:44.307 "bdev_zone_block_create", 00:05:44.307 "blobfs_create", 00:05:44.307 "blobfs_detect", 00:05:44.307 "blobfs_set_cache_size", 00:05:44.307 "bdev_aio_delete", 00:05:44.307 "bdev_aio_rescan", 00:05:44.307 "bdev_aio_create", 00:05:44.307 "bdev_ftl_set_property", 00:05:44.307 "bdev_ftl_get_properties", 00:05:44.307 "bdev_ftl_get_stats", 00:05:44.307 "bdev_ftl_unmap", 00:05:44.307 "bdev_ftl_unload", 00:05:44.307 "bdev_ftl_delete", 00:05:44.307 "bdev_ftl_load", 00:05:44.307 "bdev_ftl_create", 00:05:44.307 "bdev_virtio_attach_controller", 00:05:44.307 "bdev_virtio_scsi_get_devices", 00:05:44.307 "bdev_virtio_detach_controller", 00:05:44.307 "bdev_virtio_blk_set_hotplug", 00:05:44.307 "bdev_iscsi_delete", 00:05:44.307 "bdev_iscsi_create", 00:05:44.307 "bdev_iscsi_set_options", 00:05:44.307 "accel_error_inject_error", 00:05:44.307 "ioat_scan_accel_module", 00:05:44.307 "dsa_scan_accel_module", 00:05:44.307 "iaa_scan_accel_module", 00:05:44.307 "vfu_virtio_create_scsi_endpoint", 00:05:44.307 "vfu_virtio_scsi_remove_target", 00:05:44.307 "vfu_virtio_scsi_add_target", 00:05:44.307 "vfu_virtio_create_blk_endpoint", 00:05:44.307 "vfu_virtio_delete_endpoint", 00:05:44.307 "keyring_file_remove_key", 00:05:44.307 "keyring_file_add_key", 00:05:44.307 "iscsi_get_histogram", 00:05:44.307 "iscsi_enable_histogram", 00:05:44.307 "iscsi_set_options", 00:05:44.307 "iscsi_get_auth_groups", 00:05:44.307 "iscsi_auth_group_remove_secret", 00:05:44.307 "iscsi_auth_group_add_secret", 00:05:44.307 "iscsi_delete_auth_group", 00:05:44.307 "iscsi_create_auth_group", 00:05:44.307 "iscsi_set_discovery_auth", 00:05:44.307 "iscsi_get_options", 00:05:44.307 "iscsi_target_node_request_logout", 00:05:44.307 "iscsi_target_node_set_redirect", 00:05:44.307 "iscsi_target_node_set_auth", 00:05:44.307 "iscsi_target_node_add_lun", 00:05:44.307 "iscsi_get_stats", 00:05:44.307 "iscsi_get_connections", 00:05:44.307 "iscsi_portal_group_set_auth", 00:05:44.307 "iscsi_start_portal_group", 00:05:44.307 "iscsi_delete_portal_group", 00:05:44.307 "iscsi_create_portal_group", 00:05:44.307 "iscsi_get_portal_groups", 00:05:44.307 "iscsi_delete_target_node", 00:05:44.307 "iscsi_target_node_remove_pg_ig_maps", 00:05:44.307 "iscsi_target_node_add_pg_ig_maps", 00:05:44.307 "iscsi_create_target_node", 00:05:44.307 "iscsi_get_target_nodes", 00:05:44.307 "iscsi_delete_initiator_group", 00:05:44.307 "iscsi_initiator_group_remove_initiators", 00:05:44.307 "iscsi_initiator_group_add_initiators", 00:05:44.307 "iscsi_create_initiator_group", 00:05:44.307 "iscsi_get_initiator_groups", 00:05:44.307 "nvmf_set_crdt", 00:05:44.307 "nvmf_set_config", 00:05:44.307 "nvmf_set_max_subsystems", 00:05:44.307 "nvmf_subsystem_get_listeners", 00:05:44.307 "nvmf_subsystem_get_qpairs", 00:05:44.307 "nvmf_subsystem_get_controllers", 00:05:44.307 "nvmf_get_stats", 00:05:44.307 "nvmf_get_transports", 00:05:44.307 "nvmf_create_transport", 00:05:44.307 "nvmf_get_targets", 00:05:44.307 "nvmf_delete_target", 00:05:44.307 "nvmf_create_target", 00:05:44.307 "nvmf_subsystem_allow_any_host", 00:05:44.307 "nvmf_subsystem_remove_host", 00:05:44.307 "nvmf_subsystem_add_host", 00:05:44.307 "nvmf_ns_remove_host", 00:05:44.307 
"nvmf_ns_add_host", 00:05:44.307 "nvmf_subsystem_remove_ns", 00:05:44.307 "nvmf_subsystem_add_ns", 00:05:44.307 "nvmf_subsystem_listener_set_ana_state", 00:05:44.307 "nvmf_discovery_get_referrals", 00:05:44.307 "nvmf_discovery_remove_referral", 00:05:44.307 "nvmf_discovery_add_referral", 00:05:44.307 "nvmf_subsystem_remove_listener", 00:05:44.307 "nvmf_subsystem_add_listener", 00:05:44.307 "nvmf_delete_subsystem", 00:05:44.307 "nvmf_create_subsystem", 00:05:44.307 "nvmf_get_subsystems", 00:05:44.307 "env_dpdk_get_mem_stats", 00:05:44.307 "nbd_get_disks", 00:05:44.308 "nbd_stop_disk", 00:05:44.308 "nbd_start_disk", 00:05:44.308 "ublk_recover_disk", 00:05:44.308 "ublk_get_disks", 00:05:44.308 "ublk_stop_disk", 00:05:44.308 "ublk_start_disk", 00:05:44.308 "ublk_destroy_target", 00:05:44.308 "ublk_create_target", 00:05:44.308 "virtio_blk_create_transport", 00:05:44.308 "virtio_blk_get_transports", 00:05:44.308 "vhost_controller_set_coalescing", 00:05:44.308 "vhost_get_controllers", 00:05:44.308 "vhost_delete_controller", 00:05:44.308 "vhost_create_blk_controller", 00:05:44.308 "vhost_scsi_controller_remove_target", 00:05:44.308 "vhost_scsi_controller_add_target", 00:05:44.308 "vhost_start_scsi_controller", 00:05:44.308 "vhost_create_scsi_controller", 00:05:44.308 "thread_set_cpumask", 00:05:44.308 "framework_get_scheduler", 00:05:44.308 "framework_set_scheduler", 00:05:44.308 "framework_get_reactors", 00:05:44.308 "thread_get_io_channels", 00:05:44.308 "thread_get_pollers", 00:05:44.308 "thread_get_stats", 00:05:44.308 "framework_monitor_context_switch", 00:05:44.308 "spdk_kill_instance", 00:05:44.308 "log_enable_timestamps", 00:05:44.308 "log_get_flags", 00:05:44.308 "log_clear_flag", 00:05:44.308 "log_set_flag", 00:05:44.308 "log_get_level", 00:05:44.308 "log_set_level", 00:05:44.308 "log_get_print_level", 00:05:44.308 "log_set_print_level", 00:05:44.308 "framework_enable_cpumask_locks", 00:05:44.308 "framework_disable_cpumask_locks", 00:05:44.308 "framework_wait_init", 00:05:44.308 "framework_start_init", 00:05:44.308 "scsi_get_devices", 00:05:44.308 "bdev_get_histogram", 00:05:44.308 "bdev_enable_histogram", 00:05:44.308 "bdev_set_qos_limit", 00:05:44.308 "bdev_set_qd_sampling_period", 00:05:44.308 "bdev_get_bdevs", 00:05:44.308 "bdev_reset_iostat", 00:05:44.308 "bdev_get_iostat", 00:05:44.308 "bdev_examine", 00:05:44.308 "bdev_wait_for_examine", 00:05:44.308 "bdev_set_options", 00:05:44.308 "notify_get_notifications", 00:05:44.308 "notify_get_types", 00:05:44.308 "accel_get_stats", 00:05:44.308 "accel_set_options", 00:05:44.308 "accel_set_driver", 00:05:44.308 "accel_crypto_key_destroy", 00:05:44.308 "accel_crypto_keys_get", 00:05:44.308 "accel_crypto_key_create", 00:05:44.308 "accel_assign_opc", 00:05:44.308 "accel_get_module_info", 00:05:44.308 "accel_get_opc_assignments", 00:05:44.308 "vmd_rescan", 00:05:44.308 "vmd_remove_device", 00:05:44.308 "vmd_enable", 00:05:44.308 "sock_get_default_impl", 00:05:44.308 "sock_set_default_impl", 00:05:44.308 "sock_impl_set_options", 00:05:44.308 "sock_impl_get_options", 00:05:44.308 "iobuf_get_stats", 00:05:44.308 "iobuf_set_options", 00:05:44.308 "keyring_get_keys", 00:05:44.308 "framework_get_pci_devices", 00:05:44.308 "framework_get_config", 00:05:44.308 "framework_get_subsystems", 00:05:44.308 "vfu_tgt_set_base_path", 00:05:44.308 "trace_get_info", 00:05:44.308 "trace_get_tpoint_group_mask", 00:05:44.308 "trace_disable_tpoint_group", 00:05:44.308 "trace_enable_tpoint_group", 00:05:44.308 "trace_clear_tpoint_mask", 00:05:44.308 
"trace_set_tpoint_mask", 00:05:44.308 "spdk_get_version", 00:05:44.308 "rpc_get_methods" 00:05:44.308 ] 00:05:44.308 02:25:17 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:44.308 02:25:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:44.308 02:25:17 -- common/autotest_common.sh@10 -- # set +x 00:05:44.569 02:25:17 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:44.569 02:25:17 -- spdkcli/tcp.sh@38 -- # killprocess 4111265 00:05:44.569 02:25:17 -- common/autotest_common.sh@936 -- # '[' -z 4111265 ']' 00:05:44.569 02:25:17 -- common/autotest_common.sh@940 -- # kill -0 4111265 00:05:44.569 02:25:17 -- common/autotest_common.sh@941 -- # uname 00:05:44.569 02:25:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:44.569 02:25:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4111265 00:05:44.569 02:25:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:44.569 02:25:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:44.569 02:25:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4111265' 00:05:44.569 killing process with pid 4111265 00:05:44.569 02:25:17 -- common/autotest_common.sh@955 -- # kill 4111265 00:05:44.569 02:25:17 -- common/autotest_common.sh@960 -- # wait 4111265 00:05:44.831 00:05:44.831 real 0m1.410s 00:05:44.831 user 0m2.586s 00:05:44.831 sys 0m0.422s 00:05:44.831 02:25:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:44.831 02:25:18 -- common/autotest_common.sh@10 -- # set +x 00:05:44.831 ************************************ 00:05:44.831 END TEST spdkcli_tcp 00:05:44.831 ************************************ 00:05:44.831 02:25:18 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.831 02:25:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.831 02:25:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.831 02:25:18 -- common/autotest_common.sh@10 -- # set +x 00:05:44.831 ************************************ 00:05:44.831 START TEST dpdk_mem_utility 00:05:44.831 ************************************ 00:05:44.831 02:25:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.092 * Looking for test storage... 00:05:45.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:45.092 02:25:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:45.092 02:25:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4111635 00:05:45.092 02:25:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4111635 00:05:45.092 02:25:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.092 02:25:18 -- common/autotest_common.sh@817 -- # '[' -z 4111635 ']' 00:05:45.092 02:25:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.092 02:25:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.092 02:25:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.092 02:25:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.092 02:25:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.092 [2024-04-27 02:25:18.543785] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:05:45.093 [2024-04-27 02:25:18.543845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4111635 ] 00:05:45.093 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.093 [2024-04-27 02:25:18.608152] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.093 [2024-04-27 02:25:18.673803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.038 02:25:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:46.038 02:25:19 -- common/autotest_common.sh@850 -- # return 0 00:05:46.038 02:25:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:46.038 02:25:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:46.039 02:25:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.039 02:25:19 -- common/autotest_common.sh@10 -- # set +x 00:05:46.039 { 00:05:46.039 "filename": "/tmp/spdk_mem_dump.txt" 00:05:46.039 } 00:05:46.039 02:25:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.039 02:25:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:46.039 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:46.039 1 heaps totaling size 814.000000 MiB 00:05:46.039 size: 814.000000 MiB heap id: 0 00:05:46.039 end heaps---------- 00:05:46.039 8 mempools totaling size 598.116089 MiB 00:05:46.039 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:46.039 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:46.039 size: 84.521057 MiB name: bdev_io_4111635 00:05:46.039 size: 51.011292 MiB name: evtpool_4111635 00:05:46.039 size: 50.003479 MiB name: msgpool_4111635 00:05:46.039 size: 21.763794 MiB name: PDU_Pool 00:05:46.039 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:46.039 size: 0.026123 MiB name: Session_Pool 00:05:46.039 end mempools------- 00:05:46.039 6 memzones totaling size 4.142822 MiB 00:05:46.039 size: 1.000366 MiB name: RG_ring_0_4111635 00:05:46.039 size: 1.000366 MiB name: RG_ring_1_4111635 00:05:46.039 size: 1.000366 MiB name: RG_ring_4_4111635 00:05:46.039 size: 1.000366 MiB name: RG_ring_5_4111635 00:05:46.039 size: 0.125366 MiB name: RG_ring_2_4111635 00:05:46.039 size: 0.015991 MiB name: RG_ring_3_4111635 00:05:46.039 end memzones------- 00:05:46.039 02:25:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:46.039 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:46.039 list of free elements. 
size: 12.519348 MiB 00:05:46.039 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:46.039 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:46.039 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:46.039 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:46.039 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:46.039 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:46.039 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:46.039 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:46.039 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:46.039 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:46.039 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:46.039 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:46.039 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:46.039 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:46.039 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:46.039 list of standard malloc elements. size: 199.218079 MiB 00:05:46.039 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:46.039 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:46.039 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:46.039 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:46.039 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:46.039 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:46.039 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:46.039 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:46.039 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:46.039 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:46.039 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:46.039 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:46.039 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:46.039 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:46.039 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:46.039 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:46.039 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:46.039 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:46.039 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:46.039 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:46.039 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:46.039 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:46.039 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:46.039 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:46.039 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:46.039 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:46.039 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:46.039 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:46.039 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:46.039 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:46.039 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:46.039 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:46.039 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:46.039 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:46.039 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:46.039 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:46.039 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:46.039 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:46.039 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:46.039 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:46.039 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:46.039 list of memzone associated elements. size: 602.262573 MiB 00:05:46.039 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:46.039 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:46.039 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:46.039 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:46.039 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:46.039 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4111635_0 00:05:46.039 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:46.039 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4111635_0 00:05:46.039 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:46.039 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4111635_0 00:05:46.039 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:46.039 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:46.039 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:46.039 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:46.039 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:46.039 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4111635 00:05:46.040 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:46.040 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4111635 00:05:46.040 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:46.040 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4111635 00:05:46.040 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:46.040 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:46.040 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:46.040 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:46.040 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:46.040 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:46.040 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:46.040 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:46.040 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:46.040 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4111635 00:05:46.040 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:46.040 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4111635 00:05:46.040 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:46.040 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4111635 00:05:46.040 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:46.040 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4111635 00:05:46.040 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:46.040 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4111635 00:05:46.040 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:46.040 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:46.040 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:46.040 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:46.040 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:46.040 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:46.040 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:46.040 associated memzone info: size: 0.125366 MiB name: RG_ring_2_4111635 00:05:46.040 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:46.040 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:46.040 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:46.040 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:46.040 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:46.040 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4111635 00:05:46.040 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:46.040 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:46.040 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:46.040 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4111635 00:05:46.040 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:46.040 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4111635 00:05:46.040 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:46.040 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:46.040 02:25:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:46.040 02:25:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4111635 00:05:46.040 02:25:19 -- common/autotest_common.sh@936 -- # '[' -z 4111635 ']' 00:05:46.040 02:25:19 -- common/autotest_common.sh@940 -- # kill -0 4111635 00:05:46.040 02:25:19 -- common/autotest_common.sh@941 -- # uname 00:05:46.040 02:25:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.040 02:25:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4111635 00:05:46.040 02:25:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.040 02:25:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.040 02:25:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4111635' 00:05:46.040 killing process with pid 4111635 00:05:46.040 02:25:19 -- common/autotest_common.sh@955 -- # kill 4111635 00:05:46.040 02:25:19 -- common/autotest_common.sh@960 -- # wait 4111635 00:05:46.301 00:05:46.301 real 0m1.307s 00:05:46.301 user 0m1.365s 00:05:46.301 sys 0m0.399s 00:05:46.301 02:25:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.301 02:25:19 -- common/autotest_common.sh@10 -- # set +x 00:05:46.301 ************************************ 00:05:46.301 END TEST dpdk_mem_utility 00:05:46.301 ************************************ 00:05:46.301 02:25:19 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:46.302 02:25:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.302 02:25:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.302 02:25:19 -- common/autotest_common.sh@10 -- # set +x 
00:05:46.302 ************************************ 00:05:46.302 START TEST event 00:05:46.302 ************************************ 00:05:46.302 02:25:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:46.563 * Looking for test storage... 00:05:46.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:46.563 02:25:19 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:46.563 02:25:19 -- bdev/nbd_common.sh@6 -- # set -e 00:05:46.563 02:25:19 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.563 02:25:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:46.563 02:25:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.563 02:25:19 -- common/autotest_common.sh@10 -- # set +x 00:05:46.563 ************************************ 00:05:46.563 START TEST event_perf 00:05:46.563 ************************************ 00:05:46.563 02:25:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.563 Running I/O for 1 seconds...[2024-04-27 02:25:20.144305] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:05:46.563 [2024-04-27 02:25:20.144411] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4112021 ] 00:05:46.563 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.824 [2024-04-27 02:25:20.210629] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.824 [2024-04-27 02:25:20.285716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.824 [2024-04-27 02:25:20.285834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.824 [2024-04-27 02:25:20.285962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.824 [2024-04-27 02:25:20.285965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.766 Running I/O for 1 seconds... 00:05:47.766 lcore 0: 168877 00:05:47.766 lcore 1: 168879 00:05:47.766 lcore 2: 168877 00:05:47.766 lcore 3: 168880 00:05:47.766 done. 
00:05:47.766 00:05:47.766 real 0m1.217s 00:05:47.766 user 0m4.134s 00:05:47.766 sys 0m0.081s 00:05:47.766 02:25:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:47.766 02:25:21 -- common/autotest_common.sh@10 -- # set +x 00:05:47.766 ************************************ 00:05:47.766 END TEST event_perf 00:05:47.766 ************************************ 00:05:47.766 02:25:21 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:47.767 02:25:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:47.767 02:25:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.767 02:25:21 -- common/autotest_common.sh@10 -- # set +x 00:05:48.028 ************************************ 00:05:48.028 START TEST event_reactor 00:05:48.028 ************************************ 00:05:48.028 02:25:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:48.028 [2024-04-27 02:25:21.517264] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:05:48.028 [2024-04-27 02:25:21.517370] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4112201 ] 00:05:48.028 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.028 [2024-04-27 02:25:21.580675] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.028 [2024-04-27 02:25:21.644472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.415 test_start 00:05:49.415 oneshot 00:05:49.415 tick 100 00:05:49.415 tick 100 00:05:49.415 tick 250 00:05:49.415 tick 100 00:05:49.415 tick 100 00:05:49.415 tick 100 00:05:49.415 tick 250 00:05:49.415 tick 500 00:05:49.415 tick 100 00:05:49.415 tick 100 00:05:49.415 tick 250 00:05:49.415 tick 100 00:05:49.415 tick 100 00:05:49.415 test_end 00:05:49.415 00:05:49.415 real 0m1.200s 00:05:49.415 user 0m1.132s 00:05:49.415 sys 0m0.064s 00:05:49.415 02:25:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.415 02:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:49.415 ************************************ 00:05:49.415 END TEST event_reactor 00:05:49.415 ************************************ 00:05:49.415 02:25:22 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.415 02:25:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:49.415 02:25:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.415 02:25:22 -- common/autotest_common.sh@10 -- # set +x 00:05:49.415 ************************************ 00:05:49.415 START TEST event_reactor_perf 00:05:49.415 ************************************ 00:05:49.415 02:25:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.415 [2024-04-27 02:25:22.844134] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:05:49.415 [2024-04-27 02:25:22.844223] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4112489 ] 00:05:49.415 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.415 [2024-04-27 02:25:22.906642] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.415 [2024-04-27 02:25:22.968466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.801 test_start 00:05:50.801 test_end 00:05:50.801 Performance: 366837 events per second 00:05:50.801 00:05:50.801 real 0m1.198s 00:05:50.801 user 0m1.129s 00:05:50.801 sys 0m0.065s 00:05:50.801 02:25:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.801 02:25:24 -- common/autotest_common.sh@10 -- # set +x 00:05:50.801 ************************************ 00:05:50.801 END TEST event_reactor_perf 00:05:50.801 ************************************ 00:05:50.801 02:25:24 -- event/event.sh@49 -- # uname -s 00:05:50.801 02:25:24 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:50.801 02:25:24 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:50.801 02:25:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.801 02:25:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.801 02:25:24 -- common/autotest_common.sh@10 -- # set +x 00:05:50.801 ************************************ 00:05:50.801 START TEST event_scheduler 00:05:50.801 ************************************ 00:05:50.801 02:25:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:50.801 * Looking for test storage... 00:05:50.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:50.801 02:25:24 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:50.801 02:25:24 -- scheduler/scheduler.sh@35 -- # scheduler_pid=4112880 00:05:50.801 02:25:24 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.801 02:25:24 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:50.801 02:25:24 -- scheduler/scheduler.sh@37 -- # waitforlisten 4112880 00:05:50.801 02:25:24 -- common/autotest_common.sh@817 -- # '[' -z 4112880 ']' 00:05:50.801 02:25:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.801 02:25:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.801 02:25:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.801 02:25:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.801 02:25:24 -- common/autotest_common.sh@10 -- # set +x 00:05:50.801 [2024-04-27 02:25:24.337896] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:05:50.801 [2024-04-27 02:25:24.337963] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4112880 ] 00:05:50.801 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.801 [2024-04-27 02:25:24.394978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.062 [2024-04-27 02:25:24.458549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.062 [2024-04-27 02:25:24.458672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.062 [2024-04-27 02:25:24.458802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.062 [2024-04-27 02:25:24.458804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.634 02:25:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:51.634 02:25:25 -- common/autotest_common.sh@850 -- # return 0 00:05:51.634 02:25:25 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:51.634 02:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:51.634 02:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:51.634 POWER: Env isn't set yet! 00:05:51.634 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:51.634 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.634 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.634 POWER: Attempting to initialise PSTAT power management... 00:05:51.634 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:51.634 POWER: Initialized successfully for lcore 0 power management 00:05:51.634 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:51.634 POWER: Initialized successfully for lcore 1 power management 00:05:51.634 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:51.634 POWER: Initialized successfully for lcore 2 power management 00:05:51.634 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:51.634 POWER: Initialized successfully for lcore 3 power management 00:05:51.634 02:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:51.634 02:25:25 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:51.634 02:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:51.634 02:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:51.634 [2024-04-27 02:25:25.248860] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:51.634 02:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:51.634 02:25:25 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:51.634 02:25:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.635 02:25:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.635 02:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 ************************************ 00:05:51.895 START TEST scheduler_create_thread 00:05:51.895 ************************************ 00:05:51.895 02:25:25 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:51.895 02:25:25 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:51.895 02:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:51.895 02:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 2 00:05:51.895 02:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:51.895 02:25:25 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:51.895 02:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:51.895 02:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 3 00:05:51.895 02:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:51.895 02:25:25 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:51.895 02:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:51.895 02:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 4 00:05:51.895 02:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:51.895 02:25:25 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:51.895 02:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:51.895 02:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 5 00:05:51.895 02:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:51.895 02:25:25 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:51.895 02:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:51.896 02:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:51.896 6 00:05:51.896 02:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:51.896 02:25:25 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:51.896 02:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:51.896 02:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:51.896 7 00:05:51.896 02:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:51.896 02:25:25 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:51.896 02:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:51.896 02:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:51.896 8 00:05:51.896 02:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:51.896 02:25:25 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:51.896 02:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:51.896 02:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:51.896 9 00:05:51.896 
02:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:51.896 02:25:25 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:51.896 02:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:51.896 02:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:53.283 10 00:05:53.283 02:25:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:53.283 02:25:26 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:53.283 02:25:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:53.283 02:25:26 -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 02:25:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.670 02:25:28 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:54.670 02:25:28 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:54.670 02:25:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.670 02:25:28 -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 02:25:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:55.557 02:25:28 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:55.557 02:25:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.558 02:25:28 -- common/autotest_common.sh@10 -- # set +x 00:05:56.500 02:25:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:56.500 02:25:29 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:56.500 02:25:29 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:56.500 02:25:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:56.500 02:25:29 -- common/autotest_common.sh@10 -- # set +x 00:05:57.093 02:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.093 00:05:57.093 real 0m5.199s 00:05:57.093 user 0m0.025s 00:05:57.093 sys 0m0.005s 00:05:57.093 02:25:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.093 02:25:30 -- common/autotest_common.sh@10 -- # set +x 00:05:57.093 ************************************ 00:05:57.093 END TEST scheduler_create_thread 00:05:57.093 ************************************ 00:05:57.093 02:25:30 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:57.093 02:25:30 -- scheduler/scheduler.sh@46 -- # killprocess 4112880 00:05:57.093 02:25:30 -- common/autotest_common.sh@936 -- # '[' -z 4112880 ']' 00:05:57.093 02:25:30 -- common/autotest_common.sh@940 -- # kill -0 4112880 00:05:57.093 02:25:30 -- common/autotest_common.sh@941 -- # uname 00:05:57.093 02:25:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.093 02:25:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4112880 00:05:57.093 02:25:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:57.093 02:25:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:57.093 02:25:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4112880' 00:05:57.093 killing process with pid 4112880 00:05:57.093 02:25:30 -- common/autotest_common.sh@955 -- # kill 4112880 00:05:57.093 02:25:30 -- common/autotest_common.sh@960 -- # wait 4112880 00:05:57.358 [2024-04-27 02:25:30.876207] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:57.619 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:57.619 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:57.619 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:57.619 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:57.619 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:57.619 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:57.619 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:57.619 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:57.619 00:05:57.619 real 0m6.860s 00:05:57.619 user 0m14.716s 00:05:57.619 sys 0m0.423s 00:05:57.619 02:25:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.619 02:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:57.619 ************************************ 00:05:57.619 END TEST event_scheduler 00:05:57.619 ************************************ 00:05:57.619 02:25:31 -- event/event.sh@51 -- # modprobe -n nbd 00:05:57.619 02:25:31 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:57.619 02:25:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.619 02:25:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.619 02:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:57.880 ************************************ 00:05:57.880 START TEST app_repeat 00:05:57.880 ************************************ 00:05:57.880 02:25:31 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:57.880 02:25:31 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.880 02:25:31 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.880 02:25:31 -- event/event.sh@13 -- # local nbd_list 00:05:57.880 02:25:31 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.880 02:25:31 -- event/event.sh@14 -- # local bdev_list 00:05:57.880 02:25:31 -- event/event.sh@15 -- # local repeat_times=4 00:05:57.880 02:25:31 -- event/event.sh@17 -- # modprobe nbd 00:05:57.880 02:25:31 -- event/event.sh@19 -- # repeat_pid=4114294 00:05:57.880 02:25:31 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.880 02:25:31 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:57.880 02:25:31 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4114294' 00:05:57.880 Process app_repeat pid: 4114294 00:05:57.880 02:25:31 -- event/event.sh@23 -- # for i in {0..2} 00:05:57.880 02:25:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:57.880 spdk_app_start Round 0 00:05:57.880 02:25:31 -- event/event.sh@25 -- # waitforlisten 4114294 /var/tmp/spdk-nbd.sock 00:05:57.880 02:25:31 -- common/autotest_common.sh@817 -- # '[' -z 4114294 ']' 00:05:57.880 02:25:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.880 02:25:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:57.880 02:25:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:57.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.880 02:25:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:57.880 02:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:57.880 [2024-04-27 02:25:31.282299] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:05:57.880 [2024-04-27 02:25:31.282393] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4114294 ] 00:05:57.880 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.880 [2024-04-27 02:25:31.348970] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.880 [2024-04-27 02:25:31.423084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.880 [2024-04-27 02:25:31.423090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.454 02:25:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.454 02:25:32 -- common/autotest_common.sh@850 -- # return 0 00:05:58.454 02:25:32 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.715 Malloc0 00:05:58.715 02:25:32 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.976 Malloc1 00:05:58.976 02:25:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.976 02:25:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.976 02:25:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.976 02:25:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.976 02:25:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.976 02:25:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@12 -- # local i 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.977 /dev/nbd0 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.977 02:25:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:58.977 02:25:32 -- common/autotest_common.sh@855 -- # local i 00:05:58.977 02:25:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:58.977 02:25:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:58.977 02:25:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:58.977 02:25:32 -- 
common/autotest_common.sh@859 -- # break 00:05:58.977 02:25:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:58.977 02:25:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:58.977 02:25:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.977 1+0 records in 00:05:58.977 1+0 records out 00:05:58.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234715 s, 17.5 MB/s 00:05:58.977 02:25:32 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.977 02:25:32 -- common/autotest_common.sh@872 -- # size=4096 00:05:58.977 02:25:32 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.977 02:25:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:58.977 02:25:32 -- common/autotest_common.sh@875 -- # return 0 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.977 02:25:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.238 /dev/nbd1 00:05:59.238 02:25:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.238 02:25:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.238 02:25:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:59.238 02:25:32 -- common/autotest_common.sh@855 -- # local i 00:05:59.238 02:25:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:59.238 02:25:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:59.238 02:25:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:59.238 02:25:32 -- common/autotest_common.sh@859 -- # break 00:05:59.238 02:25:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:59.238 02:25:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:59.238 02:25:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.238 1+0 records in 00:05:59.238 1+0 records out 00:05:59.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242713 s, 16.9 MB/s 00:05:59.238 02:25:32 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.238 02:25:32 -- common/autotest_common.sh@872 -- # size=4096 00:05:59.238 02:25:32 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.238 02:25:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:59.238 02:25:32 -- common/autotest_common.sh@875 -- # return 0 00:05:59.238 02:25:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.238 02:25:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.238 02:25:32 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.238 02:25:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.238 02:25:32 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.499 { 00:05:59.499 "nbd_device": "/dev/nbd0", 00:05:59.499 "bdev_name": "Malloc0" 00:05:59.499 }, 00:05:59.499 { 00:05:59.499 "nbd_device": "/dev/nbd1", 
00:05:59.499 "bdev_name": "Malloc1" 00:05:59.499 } 00:05:59.499 ]' 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.499 { 00:05:59.499 "nbd_device": "/dev/nbd0", 00:05:59.499 "bdev_name": "Malloc0" 00:05:59.499 }, 00:05:59.499 { 00:05:59.499 "nbd_device": "/dev/nbd1", 00:05:59.499 "bdev_name": "Malloc1" 00:05:59.499 } 00:05:59.499 ]' 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.499 /dev/nbd1' 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.499 /dev/nbd1' 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.499 256+0 records in 00:05:59.499 256+0 records out 00:05:59.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124637 s, 84.1 MB/s 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.499 256+0 records in 00:05:59.499 256+0 records out 00:05:59.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158138 s, 66.3 MB/s 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.499 02:25:32 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.499 256+0 records in 00:05:59.499 256+0 records out 00:05:59.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172702 s, 60.7 MB/s 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@51 -- # local i 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.499 02:25:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@41 -- # break 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@41 -- # break 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.829 02:25:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.090 02:25:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.090 02:25:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.090 02:25:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.090 02:25:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.090 02:25:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.090 02:25:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.090 02:25:33 -- bdev/nbd_common.sh@65 -- # true 00:06:00.090 02:25:33 -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.090 02:25:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.090 02:25:33 -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.090 02:25:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.090 02:25:33 -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.090 02:25:33 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.350 02:25:33 -- event/event.sh@35 -- # 
sleep 3 00:06:00.350 [2024-04-27 02:25:33.856802] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.350 [2024-04-27 02:25:33.919091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.350 [2024-04-27 02:25:33.919096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.350 [2024-04-27 02:25:33.950919] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.350 [2024-04-27 02:25:33.950952] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.647 02:25:36 -- event/event.sh@23 -- # for i in {0..2} 00:06:03.647 02:25:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:03.647 spdk_app_start Round 1 00:06:03.647 02:25:36 -- event/event.sh@25 -- # waitforlisten 4114294 /var/tmp/spdk-nbd.sock 00:06:03.647 02:25:36 -- common/autotest_common.sh@817 -- # '[' -z 4114294 ']' 00:06:03.647 02:25:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.647 02:25:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:03.647 02:25:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.647 02:25:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:03.647 02:25:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.647 02:25:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:03.647 02:25:36 -- common/autotest_common.sh@850 -- # return 0 00:06:03.647 02:25:36 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.647 Malloc0 00:06:03.647 02:25:37 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.647 Malloc1 00:06:03.647 02:25:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@12 -- # local i 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.647 02:25:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.909 /dev/nbd0 00:06:03.909 02:25:37 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.909 02:25:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.909 02:25:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:03.909 02:25:37 -- common/autotest_common.sh@855 -- # local i 00:06:03.909 02:25:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:03.909 02:25:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:03.909 02:25:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:03.909 02:25:37 -- common/autotest_common.sh@859 -- # break 00:06:03.909 02:25:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:03.909 02:25:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:03.909 02:25:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.909 1+0 records in 00:06:03.909 1+0 records out 00:06:03.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236741 s, 17.3 MB/s 00:06:03.909 02:25:37 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.909 02:25:37 -- common/autotest_common.sh@872 -- # size=4096 00:06:03.909 02:25:37 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.909 02:25:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:03.909 02:25:37 -- common/autotest_common.sh@875 -- # return 0 00:06:03.909 02:25:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.909 02:25:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.909 02:25:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.909 /dev/nbd1 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.171 02:25:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:04.171 02:25:37 -- common/autotest_common.sh@855 -- # local i 00:06:04.171 02:25:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:04.171 02:25:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:04.171 02:25:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:04.171 02:25:37 -- common/autotest_common.sh@859 -- # break 00:06:04.171 02:25:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:04.171 02:25:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:04.171 02:25:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.171 1+0 records in 00:06:04.171 1+0 records out 00:06:04.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367536 s, 11.1 MB/s 00:06:04.171 02:25:37 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.171 02:25:37 -- common/autotest_common.sh@872 -- # size=4096 00:06:04.171 02:25:37 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.171 02:25:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:04.171 02:25:37 -- common/autotest_common.sh@875 -- # return 0 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.171 { 00:06:04.171 "nbd_device": "/dev/nbd0", 00:06:04.171 "bdev_name": "Malloc0" 00:06:04.171 }, 00:06:04.171 { 00:06:04.171 "nbd_device": "/dev/nbd1", 00:06:04.171 "bdev_name": "Malloc1" 00:06:04.171 } 00:06:04.171 ]' 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.171 { 00:06:04.171 "nbd_device": "/dev/nbd0", 00:06:04.171 "bdev_name": "Malloc0" 00:06:04.171 }, 00:06:04.171 { 00:06:04.171 "nbd_device": "/dev/nbd1", 00:06:04.171 "bdev_name": "Malloc1" 00:06:04.171 } 00:06:04.171 ]' 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.171 /dev/nbd1' 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.171 /dev/nbd1' 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.171 256+0 records in 00:06:04.171 256+0 records out 00:06:04.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124699 s, 84.1 MB/s 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.171 02:25:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.433 256+0 records in 00:06:04.433 256+0 records out 00:06:04.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162769 s, 64.4 MB/s 00:06:04.433 02:25:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.433 02:25:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.433 256+0 records in 00:06:04.433 256+0 records out 00:06:04.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185054 s, 56.7 MB/s 00:06:04.433 02:25:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.433 02:25:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.433 02:25:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.433 02:25:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.433 02:25:37 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.433 02:25:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.433 02:25:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.433 02:25:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.434 02:25:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.434 02:25:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.434 02:25:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.434 02:25:37 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.434 02:25:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.434 02:25:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.434 02:25:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.434 02:25:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.434 02:25:37 -- bdev/nbd_common.sh@51 -- # local i 00:06:04.434 02:25:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.434 02:25:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.434 02:25:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.434 02:25:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.434 02:25:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.434 02:25:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.434 02:25:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.434 02:25:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.434 02:25:38 -- bdev/nbd_common.sh@41 -- # break 00:06:04.434 02:25:38 -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.434 02:25:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.434 02:25:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.695 02:25:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.695 02:25:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.695 02:25:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.695 02:25:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.695 02:25:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.695 02:25:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.695 02:25:38 -- bdev/nbd_common.sh@41 -- # break 00:06:04.695 02:25:38 -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.695 02:25:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.695 02:25:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.695 02:25:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.956 02:25:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.956 02:25:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.956 02:25:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.956 02:25:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.956 02:25:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.956 02:25:38 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:04.956 02:25:38 -- bdev/nbd_common.sh@65 -- # true 00:06:04.956 02:25:38 -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.956 02:25:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.956 02:25:38 -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.956 02:25:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.956 02:25:38 -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.956 02:25:38 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.956 02:25:38 -- event/event.sh@35 -- # sleep 3 00:06:05.217 [2024-04-27 02:25:38.669869] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.217 [2024-04-27 02:25:38.733179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.217 [2024-04-27 02:25:38.733184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.217 [2024-04-27 02:25:38.765956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.217 [2024-04-27 02:25:38.765988] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.520 02:25:41 -- event/event.sh@23 -- # for i in {0..2} 00:06:08.520 02:25:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:08.520 spdk_app_start Round 2 00:06:08.520 02:25:41 -- event/event.sh@25 -- # waitforlisten 4114294 /var/tmp/spdk-nbd.sock 00:06:08.520 02:25:41 -- common/autotest_common.sh@817 -- # '[' -z 4114294 ']' 00:06:08.520 02:25:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.520 02:25:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:08.520 02:25:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
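Each app_repeat round repeats the same RPC-driven device setup already shown in Round 0 and visible again in the trace below. A condensed sketch of that sequence, using only the rpc.py subcommands that appear in this log (the shell variables are shorthand added here, not part of the test):

  # Per-round setup over the NBD RPC socket (paths taken from this workspace)
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock
  $RPC -s $SOCK bdev_malloc_create 64 4096        # 64 MB malloc bdev, 4 KiB blocks -> prints "Malloc0"
  $RPC -s $SOCK bdev_malloc_create 64 4096        # second bdev -> prints "Malloc1"
  $RPC -s $SOCK nbd_start_disk Malloc0 /dev/nbd0  # export each bdev as an NBD block device
  $RPC -s $SOCK nbd_start_disk Malloc1 /dev/nbd1
  $RPC -s $SOCK nbd_get_disks                     # JSON listing used by nbd_get_count in the trace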
00:06:08.520 02:25:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:08.520 02:25:41 -- common/autotest_common.sh@10 -- # set +x 00:06:08.520 02:25:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:08.520 02:25:41 -- common/autotest_common.sh@850 -- # return 0 00:06:08.520 02:25:41 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.520 Malloc0 00:06:08.520 02:25:41 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.520 Malloc1 00:06:08.520 02:25:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@12 -- # local i 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.520 02:25:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.782 /dev/nbd0 00:06:08.782 02:25:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.782 02:25:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.782 02:25:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:08.782 02:25:42 -- common/autotest_common.sh@855 -- # local i 00:06:08.782 02:25:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:08.782 02:25:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:08.782 02:25:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:08.782 02:25:42 -- common/autotest_common.sh@859 -- # break 00:06:08.782 02:25:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:08.782 02:25:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:08.782 02:25:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.782 1+0 records in 00:06:08.782 1+0 records out 00:06:08.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236829 s, 17.3 MB/s 00:06:08.782 02:25:42 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.782 02:25:42 -- common/autotest_common.sh@872 -- # size=4096 00:06:08.782 02:25:42 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.782 02:25:42 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:06:08.782 02:25:42 -- common/autotest_common.sh@875 -- # return 0 00:06:08.782 02:25:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.782 02:25:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.782 02:25:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.782 /dev/nbd1 00:06:08.782 02:25:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.782 02:25:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.782 02:25:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:08.782 02:25:42 -- common/autotest_common.sh@855 -- # local i 00:06:08.782 02:25:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:08.782 02:25:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:08.782 02:25:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:08.782 02:25:42 -- common/autotest_common.sh@859 -- # break 00:06:08.782 02:25:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:08.782 02:25:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:08.782 02:25:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.782 1+0 records in 00:06:08.782 1+0 records out 00:06:08.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266146 s, 15.4 MB/s 00:06:08.782 02:25:42 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.782 02:25:42 -- common/autotest_common.sh@872 -- # size=4096 00:06:08.782 02:25:42 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.782 02:25:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:08.782 02:25:42 -- common/autotest_common.sh@875 -- # return 0 00:06:08.782 02:25:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.782 02:25:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.782 02:25:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.782 02:25:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.782 02:25:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.044 { 00:06:09.044 "nbd_device": "/dev/nbd0", 00:06:09.044 "bdev_name": "Malloc0" 00:06:09.044 }, 00:06:09.044 { 00:06:09.044 "nbd_device": "/dev/nbd1", 00:06:09.044 "bdev_name": "Malloc1" 00:06:09.044 } 00:06:09.044 ]' 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.044 { 00:06:09.044 "nbd_device": "/dev/nbd0", 00:06:09.044 "bdev_name": "Malloc0" 00:06:09.044 }, 00:06:09.044 { 00:06:09.044 "nbd_device": "/dev/nbd1", 00:06:09.044 "bdev_name": "Malloc1" 00:06:09.044 } 00:06:09.044 ]' 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.044 /dev/nbd1' 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.044 /dev/nbd1' 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.044 02:25:42 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.044 256+0 records in 00:06:09.044 256+0 records out 00:06:09.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012455 s, 84.2 MB/s 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.044 256+0 records in 00:06:09.044 256+0 records out 00:06:09.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177208 s, 59.2 MB/s 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.044 256+0 records in 00:06:09.044 256+0 records out 00:06:09.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179268 s, 58.5 MB/s 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@51 -- # local i 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.044 02:25:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.307 02:25:42 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.307 02:25:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.307 02:25:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.307 02:25:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.307 02:25:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.307 02:25:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.307 02:25:42 -- bdev/nbd_common.sh@41 -- # break 00:06:09.307 02:25:42 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.307 02:25:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.307 02:25:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.566 02:25:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.566 02:25:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.566 02:25:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.566 02:25:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.566 02:25:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.566 02:25:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.566 02:25:42 -- bdev/nbd_common.sh@41 -- # break 00:06:09.566 02:25:42 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.566 02:25:42 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.566 02:25:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.566 02:25:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.566 02:25:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.566 02:25:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.566 02:25:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.566 02:25:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.566 02:25:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.566 02:25:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.566 02:25:43 -- bdev/nbd_common.sh@65 -- # true 00:06:09.566 02:25:43 -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.566 02:25:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.566 02:25:43 -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.566 02:25:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.826 02:25:43 -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.826 02:25:43 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.826 02:25:43 -- event/event.sh@35 -- # sleep 3 00:06:10.087 [2024-04-27 02:25:43.477135] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.087 [2024-04-27 02:25:43.539305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.087 [2024-04-27 02:25:43.539309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.087 [2024-04-27 02:25:43.571493] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.087 [2024-04-27 02:25:43.571531] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
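The data pass in each round (the dd/cmp traces above) boils down to: fill a 1 MiB temp file from /dev/urandom, write it to each exported NBD device with O_DIRECT, compare the device contents back against the file, then tear the exports down and restart the app for the next round. A minimal sketch of that flow, with an illustrative temp path instead of the workspace one:

  # Write/verify pass against the exported NBD devices, then teardown
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  TMP=$(mktemp)                                            # stands in for .../test/event/nbdrandtest
  dd if=/dev/urandom of=$TMP bs=4096 count=256             # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$TMP of=$nbd bs=4096 count=256 oflag=direct    # write it to the device
      cmp -b -n 1M $TMP $nbd                               # read back and compare
  done
  rm -f $TMP
  $RPC -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  $RPC -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  $RPC -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM  # app restarts for the next round after "sleep 3"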
00:06:13.388 02:25:46 -- event/event.sh@38 -- # waitforlisten 4114294 /var/tmp/spdk-nbd.sock 00:06:13.388 02:25:46 -- common/autotest_common.sh@817 -- # '[' -z 4114294 ']' 00:06:13.388 02:25:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.388 02:25:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:13.388 02:25:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.388 02:25:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:13.388 02:25:46 -- common/autotest_common.sh@10 -- # set +x 00:06:13.388 02:25:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.388 02:25:46 -- common/autotest_common.sh@850 -- # return 0 00:06:13.388 02:25:46 -- event/event.sh@39 -- # killprocess 4114294 00:06:13.388 02:25:46 -- common/autotest_common.sh@936 -- # '[' -z 4114294 ']' 00:06:13.388 02:25:46 -- common/autotest_common.sh@940 -- # kill -0 4114294 00:06:13.388 02:25:46 -- common/autotest_common.sh@941 -- # uname 00:06:13.388 02:25:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.388 02:25:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4114294 00:06:13.388 02:25:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.388 02:25:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.388 02:25:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4114294' 00:06:13.388 killing process with pid 4114294 00:06:13.388 02:25:46 -- common/autotest_common.sh@955 -- # kill 4114294 00:06:13.388 02:25:46 -- common/autotest_common.sh@960 -- # wait 4114294 00:06:13.388 spdk_app_start is called in Round 0. 00:06:13.388 Shutdown signal received, stop current app iteration 00:06:13.388 Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 reinitialization... 00:06:13.388 spdk_app_start is called in Round 1. 00:06:13.388 Shutdown signal received, stop current app iteration 00:06:13.388 Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 reinitialization... 00:06:13.388 spdk_app_start is called in Round 2. 00:06:13.388 Shutdown signal received, stop current app iteration 00:06:13.388 Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 reinitialization... 00:06:13.388 spdk_app_start is called in Round 3. 
00:06:13.388 Shutdown signal received, stop current app iteration 00:06:13.388 02:25:46 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:13.388 02:25:46 -- event/event.sh@42 -- # return 0 00:06:13.388 00:06:13.388 real 0m15.425s 00:06:13.388 user 0m33.324s 00:06:13.388 sys 0m2.058s 00:06:13.388 02:25:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.388 02:25:46 -- common/autotest_common.sh@10 -- # set +x 00:06:13.388 ************************************ 00:06:13.388 END TEST app_repeat 00:06:13.388 ************************************ 00:06:13.388 02:25:46 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:13.388 02:25:46 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:13.388 02:25:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.388 02:25:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.388 02:25:46 -- common/autotest_common.sh@10 -- # set +x 00:06:13.388 ************************************ 00:06:13.388 START TEST cpu_locks 00:06:13.388 ************************************ 00:06:13.388 02:25:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:13.388 * Looking for test storage... 00:06:13.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:13.388 02:25:46 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:13.388 02:25:46 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:13.388 02:25:46 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:13.388 02:25:46 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:13.388 02:25:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.388 02:25:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.388 02:25:46 -- common/autotest_common.sh@10 -- # set +x 00:06:13.649 ************************************ 00:06:13.649 START TEST default_locks 00:06:13.649 ************************************ 00:06:13.649 02:25:47 -- common/autotest_common.sh@1111 -- # default_locks 00:06:13.649 02:25:47 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4117719 00:06:13.649 02:25:47 -- event/cpu_locks.sh@47 -- # waitforlisten 4117719 00:06:13.649 02:25:47 -- common/autotest_common.sh@817 -- # '[' -z 4117719 ']' 00:06:13.649 02:25:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.649 02:25:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:13.649 02:25:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.649 02:25:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:13.649 02:25:47 -- common/autotest_common.sh@10 -- # set +x 00:06:13.649 02:25:47 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.649 [2024-04-27 02:25:47.097983] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:06:13.649 [2024-04-27 02:25:47.098032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4117719 ] 00:06:13.649 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.649 [2024-04-27 02:25:47.157991] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.649 [2024-04-27 02:25:47.224931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.592 02:25:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:14.592 02:25:47 -- common/autotest_common.sh@850 -- # return 0 00:06:14.592 02:25:47 -- event/cpu_locks.sh@49 -- # locks_exist 4117719 00:06:14.592 02:25:47 -- event/cpu_locks.sh@22 -- # lslocks -p 4117719 00:06:14.592 02:25:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.592 lslocks: write error 00:06:14.592 02:25:47 -- event/cpu_locks.sh@50 -- # killprocess 4117719 00:06:14.592 02:25:47 -- common/autotest_common.sh@936 -- # '[' -z 4117719 ']' 00:06:14.592 02:25:47 -- common/autotest_common.sh@940 -- # kill -0 4117719 00:06:14.592 02:25:47 -- common/autotest_common.sh@941 -- # uname 00:06:14.592 02:25:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.592 02:25:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4117719 00:06:14.592 02:25:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.592 02:25:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.592 02:25:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4117719' 00:06:14.592 killing process with pid 4117719 00:06:14.592 02:25:48 -- common/autotest_common.sh@955 -- # kill 4117719 00:06:14.592 02:25:48 -- common/autotest_common.sh@960 -- # wait 4117719 00:06:14.853 02:25:48 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4117719 00:06:14.853 02:25:48 -- common/autotest_common.sh@638 -- # local es=0 00:06:14.853 02:25:48 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 4117719 00:06:14.853 02:25:48 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:14.853 02:25:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:14.853 02:25:48 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:14.853 02:25:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:14.853 02:25:48 -- common/autotest_common.sh@641 -- # waitforlisten 4117719 00:06:14.853 02:25:48 -- common/autotest_common.sh@817 -- # '[' -z 4117719 ']' 00:06:14.853 02:25:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.853 02:25:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:14.853 02:25:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
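With spdk_tgt up on core 0, the default_locks check that follows reduces to asking lslocks whether that pid holds a lock on an spdk_cpu_lock file, exactly as the locks_exist trace below does. A short sketch (the pid is the one printed above; the lock-file naming comment is an assumption about SPDK's per-core lock files):

  # Core-lock check used by locks_exist
  pid=4117719                                    # spdk_tgt_pid from the trace above
  lslocks -p "$pid" | grep -q spdk_cpu_lock \
      && echo "core lock held by $pid"
  # Note: the "lslocks: write error" seen in the log is harmless; grep -q exits
  # on the first match and lslocks gets EPIPE on the closed pipe.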
00:06:14.853 02:25:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:14.853 02:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:14.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (4117719) - No such process 00:06:14.853 ERROR: process (pid: 4117719) is no longer running 00:06:14.853 02:25:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:14.853 02:25:48 -- common/autotest_common.sh@850 -- # return 1 00:06:14.853 02:25:48 -- common/autotest_common.sh@641 -- # es=1 00:06:14.853 02:25:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:14.853 02:25:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:14.853 02:25:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:14.853 02:25:48 -- event/cpu_locks.sh@54 -- # no_locks 00:06:14.853 02:25:48 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.853 02:25:48 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.853 02:25:48 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.853 00:06:14.853 real 0m1.202s 00:06:14.853 user 0m1.312s 00:06:14.853 sys 0m0.344s 00:06:14.853 02:25:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.853 02:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:14.853 ************************************ 00:06:14.853 END TEST default_locks 00:06:14.853 ************************************ 00:06:14.853 02:25:48 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:14.853 02:25:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.853 02:25:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.853 02:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:14.853 ************************************ 00:06:14.853 START TEST default_locks_via_rpc 00:06:14.853 ************************************ 00:06:14.853 02:25:48 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:06:14.853 02:25:48 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4117952 00:06:14.853 02:25:48 -- event/cpu_locks.sh@63 -- # waitforlisten 4117952 00:06:14.853 02:25:48 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.853 02:25:48 -- common/autotest_common.sh@817 -- # '[' -z 4117952 ']' 00:06:14.853 02:25:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.853 02:25:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:14.853 02:25:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.853 02:25:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:14.853 02:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:15.115 [2024-04-27 02:25:48.480063] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:06:15.115 [2024-04-27 02:25:48.480116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4117952 ] 00:06:15.115 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.115 [2024-04-27 02:25:48.541474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.115 [2024-04-27 02:25:48.606485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.688 02:25:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:15.688 02:25:49 -- common/autotest_common.sh@850 -- # return 0 00:06:15.688 02:25:49 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:15.688 02:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:15.688 02:25:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.688 02:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.688 02:25:49 -- event/cpu_locks.sh@67 -- # no_locks 00:06:15.688 02:25:49 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.688 02:25:49 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.688 02:25:49 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.688 02:25:49 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:15.688 02:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:15.688 02:25:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.688 02:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:15.688 02:25:49 -- event/cpu_locks.sh@71 -- # locks_exist 4117952 00:06:15.688 02:25:49 -- event/cpu_locks.sh@22 -- # lslocks -p 4117952 00:06:15.688 02:25:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.259 02:25:49 -- event/cpu_locks.sh@73 -- # killprocess 4117952 00:06:16.259 02:25:49 -- common/autotest_common.sh@936 -- # '[' -z 4117952 ']' 00:06:16.259 02:25:49 -- common/autotest_common.sh@940 -- # kill -0 4117952 00:06:16.259 02:25:49 -- common/autotest_common.sh@941 -- # uname 00:06:16.260 02:25:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.260 02:25:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4117952 00:06:16.260 02:25:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.260 02:25:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.260 02:25:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4117952' 00:06:16.260 killing process with pid 4117952 00:06:16.260 02:25:49 -- common/autotest_common.sh@955 -- # kill 4117952 00:06:16.260 02:25:49 -- common/autotest_common.sh@960 -- # wait 4117952 00:06:16.520 00:06:16.520 real 0m1.592s 00:06:16.520 user 0m1.698s 00:06:16.520 sys 0m0.525s 00:06:16.520 02:25:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.520 02:25:50 -- common/autotest_common.sh@10 -- # set +x 00:06:16.520 ************************************ 00:06:16.520 END TEST default_locks_via_rpc 00:06:16.520 ************************************ 00:06:16.520 02:25:50 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:16.520 02:25:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.520 02:25:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.520 02:25:50 -- common/autotest_common.sh@10 -- # set +x 00:06:16.781 ************************************ 00:06:16.781 START TEST non_locking_app_on_locked_coremask 
00:06:16.781 ************************************ 00:06:16.781 02:25:50 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:06:16.781 02:25:50 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4118366 00:06:16.781 02:25:50 -- event/cpu_locks.sh@81 -- # waitforlisten 4118366 /var/tmp/spdk.sock 00:06:16.781 02:25:50 -- common/autotest_common.sh@817 -- # '[' -z 4118366 ']' 00:06:16.781 02:25:50 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.781 02:25:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.781 02:25:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:16.781 02:25:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.781 02:25:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:16.781 02:25:50 -- common/autotest_common.sh@10 -- # set +x 00:06:16.781 [2024-04-27 02:25:50.245408] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:16.781 [2024-04-27 02:25:50.245459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4118366 ] 00:06:16.781 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.781 [2024-04-27 02:25:50.307044] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.781 [2024-04-27 02:25:50.372093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.727 02:25:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:17.727 02:25:51 -- common/autotest_common.sh@850 -- # return 0 00:06:17.727 02:25:51 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:17.727 02:25:51 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4118636 00:06:17.727 02:25:51 -- event/cpu_locks.sh@85 -- # waitforlisten 4118636 /var/tmp/spdk2.sock 00:06:17.727 02:25:51 -- common/autotest_common.sh@817 -- # '[' -z 4118636 ']' 00:06:17.727 02:25:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.727 02:25:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:17.727 02:25:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.727 02:25:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:17.727 02:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:17.727 [2024-04-27 02:25:51.041800] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:17.727 [2024-04-27 02:25:51.041848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4118636 ] 00:06:17.727 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.727 [2024-04-27 02:25:51.130095] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
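The non_locking_app_on_locked_coremask scenario visible in this part of the trace is two targets on the same core mask, where the second opts out of core locking so it can start while the first still holds the lock. A condensed sketch of the two invocations (binary path from this workspace; backgrounding added here for brevity):

  BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $BIN -m 0x1 &                                                  # first target takes the core-0 lock file
  $BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second target skips locking and shares core 0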
00:06:17.727 [2024-04-27 02:25:51.130125] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.727 [2024-04-27 02:25:51.257638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.300 02:25:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:18.300 02:25:51 -- common/autotest_common.sh@850 -- # return 0 00:06:18.300 02:25:51 -- event/cpu_locks.sh@87 -- # locks_exist 4118366 00:06:18.300 02:25:51 -- event/cpu_locks.sh@22 -- # lslocks -p 4118366 00:06:18.300 02:25:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.872 lslocks: write error 00:06:18.872 02:25:52 -- event/cpu_locks.sh@89 -- # killprocess 4118366 00:06:18.872 02:25:52 -- common/autotest_common.sh@936 -- # '[' -z 4118366 ']' 00:06:18.872 02:25:52 -- common/autotest_common.sh@940 -- # kill -0 4118366 00:06:18.872 02:25:52 -- common/autotest_common.sh@941 -- # uname 00:06:18.872 02:25:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.872 02:25:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4118366 00:06:18.872 02:25:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.872 02:25:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.872 02:25:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4118366' 00:06:18.872 killing process with pid 4118366 00:06:18.872 02:25:52 -- common/autotest_common.sh@955 -- # kill 4118366 00:06:18.872 02:25:52 -- common/autotest_common.sh@960 -- # wait 4118366 00:06:19.459 02:25:52 -- event/cpu_locks.sh@90 -- # killprocess 4118636 00:06:19.459 02:25:52 -- common/autotest_common.sh@936 -- # '[' -z 4118636 ']' 00:06:19.459 02:25:52 -- common/autotest_common.sh@940 -- # kill -0 4118636 00:06:19.459 02:25:52 -- common/autotest_common.sh@941 -- # uname 00:06:19.459 02:25:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:19.459 02:25:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4118636 00:06:19.459 02:25:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:19.459 02:25:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:19.459 02:25:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4118636' 00:06:19.459 killing process with pid 4118636 00:06:19.459 02:25:52 -- common/autotest_common.sh@955 -- # kill 4118636 00:06:19.459 02:25:52 -- common/autotest_common.sh@960 -- # wait 4118636 00:06:19.459 00:06:19.459 real 0m2.843s 00:06:19.459 user 0m3.095s 00:06:19.459 sys 0m0.826s 00:06:19.459 02:25:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.459 02:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:19.459 ************************************ 00:06:19.459 END TEST non_locking_app_on_locked_coremask 00:06:19.459 ************************************ 00:06:19.459 02:25:53 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:19.459 02:25:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.459 02:25:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.459 02:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:19.722 ************************************ 00:06:19.722 START TEST locking_app_on_unlocked_coremask 00:06:19.722 ************************************ 00:06:19.722 02:25:53 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:06:19.722 02:25:53 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4119020 00:06:19.722 02:25:53 -- 
event/cpu_locks.sh@99 -- # waitforlisten 4119020 /var/tmp/spdk.sock 00:06:19.722 02:25:53 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:19.722 02:25:53 -- common/autotest_common.sh@817 -- # '[' -z 4119020 ']' 00:06:19.722 02:25:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.722 02:25:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:19.722 02:25:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.722 02:25:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:19.722 02:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:19.722 [2024-04-27 02:25:53.265139] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:19.722 [2024-04-27 02:25:53.265184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4119020 ] 00:06:19.722 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.722 [2024-04-27 02:25:53.323887] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:19.722 [2024-04-27 02:25:53.323916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.982 [2024-04-27 02:25:53.386531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.553 02:25:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:20.553 02:25:54 -- common/autotest_common.sh@850 -- # return 0 00:06:20.553 02:25:54 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4119307 00:06:20.553 02:25:54 -- event/cpu_locks.sh@103 -- # waitforlisten 4119307 /var/tmp/spdk2.sock 00:06:20.553 02:25:54 -- common/autotest_common.sh@817 -- # '[' -z 4119307 ']' 00:06:20.553 02:25:54 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.553 02:25:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.553 02:25:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:20.553 02:25:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.553 02:25:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:20.553 02:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:20.553 [2024-04-27 02:25:54.074394] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:06:20.553 [2024-04-27 02:25:54.074447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4119307 ] 00:06:20.553 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.553 [2024-04-27 02:25:54.167326] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.813 [2024-04-27 02:25:54.290853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.386 02:25:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:21.386 02:25:54 -- common/autotest_common.sh@850 -- # return 0 00:06:21.386 02:25:54 -- event/cpu_locks.sh@105 -- # locks_exist 4119307 00:06:21.386 02:25:54 -- event/cpu_locks.sh@22 -- # lslocks -p 4119307 00:06:21.386 02:25:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.958 lslocks: write error 00:06:21.958 02:25:55 -- event/cpu_locks.sh@107 -- # killprocess 4119020 00:06:21.958 02:25:55 -- common/autotest_common.sh@936 -- # '[' -z 4119020 ']' 00:06:21.958 02:25:55 -- common/autotest_common.sh@940 -- # kill -0 4119020 00:06:21.958 02:25:55 -- common/autotest_common.sh@941 -- # uname 00:06:21.958 02:25:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:21.958 02:25:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4119020 00:06:21.958 02:25:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:21.958 02:25:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:21.958 02:25:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4119020' 00:06:21.958 killing process with pid 4119020 00:06:21.958 02:25:55 -- common/autotest_common.sh@955 -- # kill 4119020 00:06:21.958 02:25:55 -- common/autotest_common.sh@960 -- # wait 4119020 00:06:22.530 02:25:55 -- event/cpu_locks.sh@108 -- # killprocess 4119307 00:06:22.530 02:25:55 -- common/autotest_common.sh@936 -- # '[' -z 4119307 ']' 00:06:22.530 02:25:55 -- common/autotest_common.sh@940 -- # kill -0 4119307 00:06:22.530 02:25:55 -- common/autotest_common.sh@941 -- # uname 00:06:22.530 02:25:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:22.530 02:25:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4119307 00:06:22.530 02:25:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:22.530 02:25:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:22.530 02:25:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4119307' 00:06:22.530 killing process with pid 4119307 00:06:22.530 02:25:55 -- common/autotest_common.sh@955 -- # kill 4119307 00:06:22.530 02:25:55 -- common/autotest_common.sh@960 -- # wait 4119307 00:06:22.530 00:06:22.530 real 0m2.925s 00:06:22.530 user 0m3.184s 00:06:22.530 sys 0m0.876s 00:06:22.530 02:25:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:22.530 02:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:22.530 ************************************ 00:06:22.530 END TEST locking_app_on_unlocked_coremask 00:06:22.530 ************************************ 00:06:22.791 02:25:56 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:22.791 02:25:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.791 02:25:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.791 02:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:22.791 
************************************ 00:06:22.791 START TEST locking_app_on_locked_coremask 00:06:22.791 ************************************ 00:06:22.791 02:25:56 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:06:22.791 02:25:56 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4119733 00:06:22.791 02:25:56 -- event/cpu_locks.sh@116 -- # waitforlisten 4119733 /var/tmp/spdk.sock 00:06:22.791 02:25:56 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.791 02:25:56 -- common/autotest_common.sh@817 -- # '[' -z 4119733 ']' 00:06:22.791 02:25:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.791 02:25:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:22.791 02:25:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.791 02:25:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:22.791 02:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:22.791 [2024-04-27 02:25:56.376273] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:22.791 [2024-04-27 02:25:56.376340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4119733 ] 00:06:22.791 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.078 [2024-04-27 02:25:56.441985] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.078 [2024-04-27 02:25:56.514355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.654 02:25:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:23.654 02:25:57 -- common/autotest_common.sh@850 -- # return 0 00:06:23.654 02:25:57 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4119886 00:06:23.655 02:25:57 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4119886 /var/tmp/spdk2.sock 00:06:23.655 02:25:57 -- common/autotest_common.sh@638 -- # local es=0 00:06:23.655 02:25:57 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:23.655 02:25:57 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 4119886 /var/tmp/spdk2.sock 00:06:23.655 02:25:57 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:23.655 02:25:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:23.655 02:25:57 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:23.655 02:25:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:23.655 02:25:57 -- common/autotest_common.sh@641 -- # waitforlisten 4119886 /var/tmp/spdk2.sock 00:06:23.655 02:25:57 -- common/autotest_common.sh@817 -- # '[' -z 4119886 ']' 00:06:23.655 02:25:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.655 02:25:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:23.655 02:25:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:23.655 02:25:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:23.655 02:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:23.655 [2024-04-27 02:25:57.186311] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:23.655 [2024-04-27 02:25:57.186362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4119886 ] 00:06:23.655 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.655 [2024-04-27 02:25:57.274581] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4119733 has claimed it. 00:06:23.655 [2024-04-27 02:25:57.274620] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:24.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (4119886) - No such process 00:06:24.227 ERROR: process (pid: 4119886) is no longer running 00:06:24.227 02:25:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:24.227 02:25:57 -- common/autotest_common.sh@850 -- # return 1 00:06:24.227 02:25:57 -- common/autotest_common.sh@641 -- # es=1 00:06:24.227 02:25:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:24.227 02:25:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:24.227 02:25:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:24.227 02:25:57 -- event/cpu_locks.sh@122 -- # locks_exist 4119733 00:06:24.227 02:25:57 -- event/cpu_locks.sh@22 -- # lslocks -p 4119733 00:06:24.227 02:25:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.799 lslocks: write error 00:06:24.799 02:25:58 -- event/cpu_locks.sh@124 -- # killprocess 4119733 00:06:24.799 02:25:58 -- common/autotest_common.sh@936 -- # '[' -z 4119733 ']' 00:06:24.799 02:25:58 -- common/autotest_common.sh@940 -- # kill -0 4119733 00:06:24.799 02:25:58 -- common/autotest_common.sh@941 -- # uname 00:06:24.799 02:25:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.799 02:25:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4119733 00:06:24.799 02:25:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.799 02:25:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.799 02:25:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4119733' 00:06:24.799 killing process with pid 4119733 00:06:24.799 02:25:58 -- common/autotest_common.sh@955 -- # kill 4119733 00:06:24.799 02:25:58 -- common/autotest_common.sh@960 -- # wait 4119733 00:06:25.061 00:06:25.061 real 0m2.159s 00:06:25.061 user 0m2.383s 00:06:25.061 sys 0m0.606s 00:06:25.061 02:25:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.061 02:25:58 -- common/autotest_common.sh@10 -- # set +x 00:06:25.061 ************************************ 00:06:25.061 END TEST locking_app_on_locked_coremask 00:06:25.061 ************************************ 00:06:25.061 02:25:58 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:25.061 02:25:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.061 02:25:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.061 02:25:58 -- common/autotest_common.sh@10 -- # set +x 00:06:25.061 ************************************ 00:06:25.061 START TEST locking_overlapped_coremask 00:06:25.061 
************************************ 00:06:25.061 02:25:58 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:06:25.061 02:25:58 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4120146 00:06:25.061 02:25:58 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:25.061 02:25:58 -- event/cpu_locks.sh@133 -- # waitforlisten 4120146 /var/tmp/spdk.sock 00:06:25.061 02:25:58 -- common/autotest_common.sh@817 -- # '[' -z 4120146 ']' 00:06:25.061 02:25:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.061 02:25:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:25.061 02:25:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.061 02:25:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:25.061 02:25:58 -- common/autotest_common.sh@10 -- # set +x 00:06:25.322 [2024-04-27 02:25:58.708833] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:25.323 [2024-04-27 02:25:58.708888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4120146 ] 00:06:25.323 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.323 [2024-04-27 02:25:58.772439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.323 [2024-04-27 02:25:58.846813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.323 [2024-04-27 02:25:58.846931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.323 [2024-04-27 02:25:58.846934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.895 02:25:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:25.895 02:25:59 -- common/autotest_common.sh@850 -- # return 0 00:06:25.895 02:25:59 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4120454 00:06:25.895 02:25:59 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4120454 /var/tmp/spdk2.sock 00:06:25.895 02:25:59 -- common/autotest_common.sh@638 -- # local es=0 00:06:25.895 02:25:59 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:25.895 02:25:59 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 4120454 /var/tmp/spdk2.sock 00:06:25.895 02:25:59 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:25.895 02:25:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:25.895 02:25:59 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:25.895 02:25:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:25.895 02:25:59 -- common/autotest_common.sh@641 -- # waitforlisten 4120454 /var/tmp/spdk2.sock 00:06:25.895 02:25:59 -- common/autotest_common.sh@817 -- # '[' -z 4120454 ']' 00:06:25.895 02:25:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.895 02:25:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:25.895 02:25:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:25.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.895 02:25:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:25.895 02:25:59 -- common/autotest_common.sh@10 -- # set +x 00:06:26.156 [2024-04-27 02:25:59.544616] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:26.156 [2024-04-27 02:25:59.544668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4120454 ] 00:06:26.156 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.156 [2024-04-27 02:25:59.615742] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4120146 has claimed it. 00:06:26.156 [2024-04-27 02:25:59.615773] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:26.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (4120454) - No such process 00:06:26.728 ERROR: process (pid: 4120454) is no longer running 00:06:26.728 02:26:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:26.728 02:26:00 -- common/autotest_common.sh@850 -- # return 1 00:06:26.728 02:26:00 -- common/autotest_common.sh@641 -- # es=1 00:06:26.728 02:26:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:26.728 02:26:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:26.728 02:26:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:26.728 02:26:00 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:26.728 02:26:00 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:26.728 02:26:00 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:26.728 02:26:00 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:26.728 02:26:00 -- event/cpu_locks.sh@141 -- # killprocess 4120146 00:06:26.728 02:26:00 -- common/autotest_common.sh@936 -- # '[' -z 4120146 ']' 00:06:26.728 02:26:00 -- common/autotest_common.sh@940 -- # kill -0 4120146 00:06:26.728 02:26:00 -- common/autotest_common.sh@941 -- # uname 00:06:26.728 02:26:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:26.728 02:26:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4120146 00:06:26.728 02:26:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:26.728 02:26:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:26.728 02:26:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4120146' 00:06:26.728 killing process with pid 4120146 00:06:26.729 02:26:00 -- common/autotest_common.sh@955 -- # kill 4120146 00:06:26.729 02:26:00 -- common/autotest_common.sh@960 -- # wait 4120146 00:06:26.990 00:06:26.990 real 0m1.763s 00:06:26.990 user 0m4.961s 00:06:26.990 sys 0m0.396s 00:06:26.990 02:26:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.990 02:26:00 -- common/autotest_common.sh@10 -- # set +x 00:06:26.990 ************************************ 00:06:26.990 END TEST locking_overlapped_coremask 00:06:26.990 ************************************ 00:06:26.990 02:26:00 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:26.990 02:26:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.990 02:26:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.990 02:26:00 -- common/autotest_common.sh@10 -- # set +x 00:06:26.990 ************************************ 00:06:26.990 START TEST locking_overlapped_coremask_via_rpc 00:06:26.990 ************************************ 00:06:26.990 02:26:00 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:06:26.990 02:26:00 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4120646 00:06:26.990 02:26:00 -- event/cpu_locks.sh@149 -- # waitforlisten 4120646 /var/tmp/spdk.sock 00:06:26.990 02:26:00 -- common/autotest_common.sh@817 -- # '[' -z 4120646 ']' 00:06:26.990 02:26:00 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:26.990 02:26:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.990 02:26:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:26.990 02:26:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.990 02:26:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:26.990 02:26:00 -- common/autotest_common.sh@10 -- # set +x 00:06:27.251 [2024-04-27 02:26:00.646387] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:27.251 [2024-04-27 02:26:00.646439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4120646 ] 00:06:27.251 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.251 [2024-04-27 02:26:00.706553] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:27.251 [2024-04-27 02:26:00.706581] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.251 [2024-04-27 02:26:00.773519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.251 [2024-04-27 02:26:00.773637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.251 [2024-04-27 02:26:00.773640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.824 02:26:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:27.824 02:26:01 -- common/autotest_common.sh@850 -- # return 0 00:06:27.824 02:26:01 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4120837 00:06:27.824 02:26:01 -- event/cpu_locks.sh@153 -- # waitforlisten 4120837 /var/tmp/spdk2.sock 00:06:27.824 02:26:01 -- common/autotest_common.sh@817 -- # '[' -z 4120837 ']' 00:06:27.824 02:26:01 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:27.824 02:26:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.824 02:26:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:27.824 02:26:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:27.824 02:26:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:27.824 02:26:01 -- common/autotest_common.sh@10 -- # set +x 00:06:28.085 [2024-04-27 02:26:01.471784] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:28.085 [2024-04-27 02:26:01.471837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4120837 ] 00:06:28.085 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.085 [2024-04-27 02:26:01.542678] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.085 [2024-04-27 02:26:01.542700] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.085 [2024-04-27 02:26:01.646490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.085 [2024-04-27 02:26:01.650398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.085 [2024-04-27 02:26:01.650400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:28.658 02:26:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:28.658 02:26:02 -- common/autotest_common.sh@850 -- # return 0 00:06:28.658 02:26:02 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:28.658 02:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.658 02:26:02 -- common/autotest_common.sh@10 -- # set +x 00:06:28.658 02:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.658 02:26:02 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.658 02:26:02 -- common/autotest_common.sh@638 -- # local es=0 00:06:28.658 02:26:02 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.658 02:26:02 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:28.658 02:26:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:28.658 02:26:02 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:28.658 02:26:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:28.658 02:26:02 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.658 02:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.658 02:26:02 -- common/autotest_common.sh@10 -- # set +x 00:06:28.658 [2024-04-27 02:26:02.238338] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4120646 has claimed it. 
00:06:28.658 request: 00:06:28.658 { 00:06:28.658 "method": "framework_enable_cpumask_locks", 00:06:28.658 "req_id": 1 00:06:28.658 } 00:06:28.658 Got JSON-RPC error response 00:06:28.658 response: 00:06:28.658 { 00:06:28.658 "code": -32603, 00:06:28.658 "message": "Failed to claim CPU core: 2" 00:06:28.658 } 00:06:28.658 02:26:02 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:28.658 02:26:02 -- common/autotest_common.sh@641 -- # es=1 00:06:28.658 02:26:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:28.658 02:26:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:28.658 02:26:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:28.658 02:26:02 -- event/cpu_locks.sh@158 -- # waitforlisten 4120646 /var/tmp/spdk.sock 00:06:28.658 02:26:02 -- common/autotest_common.sh@817 -- # '[' -z 4120646 ']' 00:06:28.658 02:26:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.658 02:26:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:28.658 02:26:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.658 02:26:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:28.658 02:26:02 -- common/autotest_common.sh@10 -- # set +x 00:06:28.919 02:26:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:28.919 02:26:02 -- common/autotest_common.sh@850 -- # return 0 00:06:28.919 02:26:02 -- event/cpu_locks.sh@159 -- # waitforlisten 4120837 /var/tmp/spdk2.sock 00:06:28.919 02:26:02 -- common/autotest_common.sh@817 -- # '[' -z 4120837 ']' 00:06:28.919 02:26:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.919 02:26:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:28.919 02:26:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
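The -32603 response above is the pass condition for this test rather than a failure: the first target (pid 4120646) was started on coremask 0x7 and, once framework_enable_cpumask_locks succeeded against it, it holds the per-core lock files, so the second target on coremask 0x1c cannot claim the core the two masks share. The mask arithmetic and the lock-file check can be sketched outside the harness as follows; the /var/tmp/spdk_cpu_lock_* naming and the lslocks test mirror what the trace itself runs, while the exact commands below are illustrative and not a verified part of the suite:

  # 0x7 covers cores 0-2, 0x1c covers cores 2-4; the bitwise AND is the contested core
  printf 'contested core mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2

  # same check the suite's locks_exist helper performs: a live target keeps a lock
  # on /var/tmp/spdk_cpu_lock_<core> (e.g. spdk_cpu_lock_000) while cpumask locks are on
  pid=4120646
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "pid $pid holds CPU core locks"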
00:06:28.919 02:26:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:28.919 02:26:02 -- common/autotest_common.sh@10 -- # set +x 00:06:29.181 02:26:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:29.181 02:26:02 -- common/autotest_common.sh@850 -- # return 0 00:06:29.181 02:26:02 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:29.181 02:26:02 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.181 02:26:02 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.181 02:26:02 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.181 00:06:29.181 real 0m1.996s 00:06:29.181 user 0m0.772s 00:06:29.181 sys 0m0.149s 00:06:29.181 02:26:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.181 02:26:02 -- common/autotest_common.sh@10 -- # set +x 00:06:29.181 ************************************ 00:06:29.181 END TEST locking_overlapped_coremask_via_rpc 00:06:29.181 ************************************ 00:06:29.181 02:26:02 -- event/cpu_locks.sh@174 -- # cleanup 00:06:29.181 02:26:02 -- event/cpu_locks.sh@15 -- # [[ -z 4120646 ]] 00:06:29.181 02:26:02 -- event/cpu_locks.sh@15 -- # killprocess 4120646 00:06:29.181 02:26:02 -- common/autotest_common.sh@936 -- # '[' -z 4120646 ']' 00:06:29.181 02:26:02 -- common/autotest_common.sh@940 -- # kill -0 4120646 00:06:29.181 02:26:02 -- common/autotest_common.sh@941 -- # uname 00:06:29.181 02:26:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.181 02:26:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4120646 00:06:29.181 02:26:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.181 02:26:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.181 02:26:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4120646' 00:06:29.181 killing process with pid 4120646 00:06:29.181 02:26:02 -- common/autotest_common.sh@955 -- # kill 4120646 00:06:29.181 02:26:02 -- common/autotest_common.sh@960 -- # wait 4120646 00:06:29.443 02:26:02 -- event/cpu_locks.sh@16 -- # [[ -z 4120837 ]] 00:06:29.443 02:26:02 -- event/cpu_locks.sh@16 -- # killprocess 4120837 00:06:29.443 02:26:02 -- common/autotest_common.sh@936 -- # '[' -z 4120837 ']' 00:06:29.443 02:26:02 -- common/autotest_common.sh@940 -- # kill -0 4120837 00:06:29.443 02:26:02 -- common/autotest_common.sh@941 -- # uname 00:06:29.443 02:26:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.443 02:26:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4120837 00:06:29.443 02:26:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:29.443 02:26:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:29.443 02:26:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4120837' 00:06:29.443 killing process with pid 4120837 00:06:29.443 02:26:02 -- common/autotest_common.sh@955 -- # kill 4120837 00:06:29.443 02:26:02 -- common/autotest_common.sh@960 -- # wait 4120837 00:06:29.703 02:26:03 -- event/cpu_locks.sh@18 -- # rm -f 00:06:29.703 02:26:03 -- event/cpu_locks.sh@1 -- # cleanup 00:06:29.703 02:26:03 -- event/cpu_locks.sh@15 -- # [[ -z 4120646 ]] 00:06:29.703 02:26:03 -- event/cpu_locks.sh@15 -- # killprocess 4120646 
00:06:29.703 02:26:03 -- common/autotest_common.sh@936 -- # '[' -z 4120646 ']' 00:06:29.703 02:26:03 -- common/autotest_common.sh@940 -- # kill -0 4120646 00:06:29.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (4120646) - No such process 00:06:29.703 02:26:03 -- common/autotest_common.sh@963 -- # echo 'Process with pid 4120646 is not found' 00:06:29.703 Process with pid 4120646 is not found 00:06:29.703 02:26:03 -- event/cpu_locks.sh@16 -- # [[ -z 4120837 ]] 00:06:29.703 02:26:03 -- event/cpu_locks.sh@16 -- # killprocess 4120837 00:06:29.703 02:26:03 -- common/autotest_common.sh@936 -- # '[' -z 4120837 ']' 00:06:29.703 02:26:03 -- common/autotest_common.sh@940 -- # kill -0 4120837 00:06:29.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (4120837) - No such process 00:06:29.703 02:26:03 -- common/autotest_common.sh@963 -- # echo 'Process with pid 4120837 is not found' 00:06:29.703 Process with pid 4120837 is not found 00:06:29.703 02:26:03 -- event/cpu_locks.sh@18 -- # rm -f 00:06:29.703 00:06:29.703 real 0m16.304s 00:06:29.703 user 0m27.215s 00:06:29.703 sys 0m4.860s 00:06:29.703 02:26:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.703 02:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:29.703 ************************************ 00:06:29.703 END TEST cpu_locks 00:06:29.703 ************************************ 00:06:29.703 00:06:29.703 real 0m43.319s 00:06:29.703 user 1m22.052s 00:06:29.703 sys 0m8.181s 00:06:29.703 02:26:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.703 02:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:29.703 ************************************ 00:06:29.703 END TEST event 00:06:29.703 ************************************ 00:06:29.703 02:26:03 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:29.703 02:26:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.703 02:26:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.703 02:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:29.963 ************************************ 00:06:29.963 START TEST thread 00:06:29.963 ************************************ 00:06:29.963 02:26:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:29.963 * Looking for test storage... 00:06:29.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:29.963 02:26:03 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:29.963 02:26:03 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:29.963 02:26:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.963 02:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:29.963 ************************************ 00:06:29.963 START TEST thread_poller_perf 00:06:29.963 ************************************ 00:06:29.963 02:26:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.224 [2024-04-27 02:26:03.594667] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:06:30.224 [2024-04-27 02:26:03.594778] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4121293 ] 00:06:30.224 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.224 [2024-04-27 02:26:03.661295] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.224 [2024-04-27 02:26:03.733743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.224 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:31.609 ====================================== 00:06:31.609 busy:2414333268 (cyc) 00:06:31.609 total_run_count: 288000 00:06:31.609 tsc_hz: 2400000000 (cyc) 00:06:31.609 ====================================== 00:06:31.609 poller_cost: 8383 (cyc), 3492 (nsec) 00:06:31.609 00:06:31.609 real 0m1.223s 00:06:31.609 user 0m1.142s 00:06:31.609 sys 0m0.077s 00:06:31.609 02:26:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.609 02:26:04 -- common/autotest_common.sh@10 -- # set +x 00:06:31.609 ************************************ 00:06:31.609 END TEST thread_poller_perf 00:06:31.609 ************************************ 00:06:31.609 02:26:04 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:31.609 02:26:04 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:31.609 02:26:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.609 02:26:04 -- common/autotest_common.sh@10 -- # set +x 00:06:31.609 ************************************ 00:06:31.609 START TEST thread_poller_perf 00:06:31.609 ************************************ 00:06:31.609 02:26:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:31.609 [2024-04-27 02:26:05.001142] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:31.609 [2024-04-27 02:26:05.001249] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4121654 ] 00:06:31.609 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.609 [2024-04-27 02:26:05.067155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.609 [2024-04-27 02:26:05.138008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.609 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:32.996 ====================================== 00:06:32.996 busy:2401924724 (cyc) 00:06:32.996 total_run_count: 3805000 00:06:32.996 tsc_hz: 2400000000 (cyc) 00:06:32.996 ====================================== 00:06:32.996 poller_cost: 631 (cyc), 262 (nsec) 00:06:32.996 00:06:32.996 real 0m1.211s 00:06:32.996 user 0m1.131s 00:06:32.996 sys 0m0.076s 00:06:32.996 02:26:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.996 02:26:06 -- common/autotest_common.sh@10 -- # set +x 00:06:32.996 ************************************ 00:06:32.996 END TEST thread_poller_perf 00:06:32.996 ************************************ 00:06:32.996 02:26:06 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:32.996 00:06:32.996 real 0m2.901s 00:06:32.996 user 0m2.449s 00:06:32.996 sys 0m0.419s 00:06:32.996 02:26:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.996 02:26:06 -- common/autotest_common.sh@10 -- # set +x 00:06:32.996 ************************************ 00:06:32.996 END TEST thread 00:06:32.996 ************************************ 00:06:32.996 02:26:06 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:32.996 02:26:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.996 02:26:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.996 02:26:06 -- common/autotest_common.sh@10 -- # set +x 00:06:32.996 ************************************ 00:06:32.996 START TEST accel 00:06:32.996 ************************************ 00:06:32.996 02:26:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:32.996 * Looking for test storage... 00:06:32.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:32.996 02:26:06 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:32.996 02:26:06 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:32.996 02:26:06 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:32.996 02:26:06 -- accel/accel.sh@62 -- # spdk_tgt_pid=4122050 00:06:32.996 02:26:06 -- accel/accel.sh@63 -- # waitforlisten 4122050 00:06:32.996 02:26:06 -- common/autotest_common.sh@817 -- # '[' -z 4122050 ']' 00:06:32.996 02:26:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.996 02:26:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:32.996 02:26:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.996 02:26:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:32.996 02:26:06 -- common/autotest_common.sh@10 -- # set +x 00:06:32.996 02:26:06 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:32.996 02:26:06 -- accel/accel.sh@61 -- # build_accel_config 00:06:32.996 02:26:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.996 02:26:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.996 02:26:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.996 02:26:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.996 02:26:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.996 02:26:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.996 02:26:06 -- accel/accel.sh@41 -- # jq -r . 
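Before the accel suite output continues, note that the two thread_poller_perf summaries above are internally consistent: poller_cost is simply the busy cycle count divided by total_run_count, and the nanosecond figure follows from the reported 2400000000 Hz TSC. A quick shell-arithmetic check (illustrative only, not part of the captured run):

  # 1 us period run: 2414333268 cyc / 288000 calls = 8383 cyc -> 3492 ns at 2.4 GHz
  echo $(( 2414333268 / 288000 ))
  echo $(( 2414333268 / 288000 * 1000000000 / 2400000000 ))

  # 0 us period run: 2401924724 cyc / 3805000 calls = 631 cyc -> 262 ns
  echo $(( 2401924724 / 3805000 ))
  echo $(( 2401924724 / 3805000 * 1000000000 / 2400000000 ))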
00:06:32.996 [2024-04-27 02:26:06.509092] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:32.996 [2024-04-27 02:26:06.509161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4122050 ] 00:06:32.996 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.996 [2024-04-27 02:26:06.572464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.257 [2024-04-27 02:26:06.644215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.830 02:26:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:33.830 02:26:07 -- common/autotest_common.sh@850 -- # return 0 00:06:33.830 02:26:07 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:33.830 02:26:07 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:33.830 02:26:07 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:33.830 02:26:07 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:33.830 02:26:07 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:33.830 02:26:07 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:33.830 02:26:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:33.830 02:26:07 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:33.830 02:26:07 -- common/autotest_common.sh@10 -- # set +x 00:06:33.830 02:26:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:33.830 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.830 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.830 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.830 02:26:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.830 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.830 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.830 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.830 02:26:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.830 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.830 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.830 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.831 02:26:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.831 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.831 02:26:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.831 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.831 02:26:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.831 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.831 02:26:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.831 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.831 02:26:07 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:33.831 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.831 02:26:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.831 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.831 02:26:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.831 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.831 02:26:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.831 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.831 02:26:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.831 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.831 02:26:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.831 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.831 02:26:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.831 02:26:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # IFS== 00:06:33.831 02:26:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:33.831 02:26:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.831 02:26:07 -- accel/accel.sh@75 -- # killprocess 4122050 00:06:33.831 02:26:07 -- common/autotest_common.sh@936 -- # '[' -z 4122050 ']' 00:06:33.831 02:26:07 -- common/autotest_common.sh@940 -- # kill -0 4122050 00:06:33.831 02:26:07 -- common/autotest_common.sh@941 -- # uname 00:06:33.831 02:26:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.831 02:26:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4122050 00:06:33.831 02:26:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.831 02:26:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.831 02:26:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4122050' 00:06:33.831 killing process with pid 4122050 00:06:33.831 02:26:07 -- common/autotest_common.sh@955 -- # kill 4122050 00:06:33.831 02:26:07 -- common/autotest_common.sh@960 -- # wait 4122050 00:06:34.093 02:26:07 -- accel/accel.sh@76 -- # trap - ERR 00:06:34.093 02:26:07 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:34.093 02:26:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:34.093 02:26:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.093 02:26:07 -- common/autotest_common.sh@10 -- # set +x 00:06:34.355 02:26:07 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:34.355 02:26:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:34.355 02:26:07 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:34.355 02:26:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.355 02:26:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.355 02:26:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.355 02:26:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.355 02:26:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.355 02:26:07 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.355 02:26:07 -- accel/accel.sh@41 -- # jq -r . 00:06:34.355 02:26:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.355 02:26:07 -- common/autotest_common.sh@10 -- # set +x 00:06:34.355 02:26:07 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:34.355 02:26:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:34.355 02:26:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.355 02:26:07 -- common/autotest_common.sh@10 -- # set +x 00:06:34.355 ************************************ 00:06:34.355 START TEST accel_missing_filename 00:06:34.355 ************************************ 00:06:34.355 02:26:07 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:34.355 02:26:07 -- common/autotest_common.sh@638 -- # local es=0 00:06:34.355 02:26:07 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:34.355 02:26:07 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:34.355 02:26:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:34.355 02:26:07 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:34.355 02:26:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:34.355 02:26:07 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:34.355 02:26:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:34.355 02:26:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.355 02:26:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.355 02:26:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.355 02:26:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.355 02:26:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.355 02:26:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.355 02:26:07 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.355 02:26:07 -- accel/accel.sh@41 -- # jq -r . 00:06:34.617 [2024-04-27 02:26:07.985987] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:34.617 [2024-04-27 02:26:07.986092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4122438 ] 00:06:34.617 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.617 [2024-04-27 02:26:08.051239] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.617 [2024-04-27 02:26:08.122402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.617 [2024-04-27 02:26:08.154717] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.617 [2024-04-27 02:26:08.191688] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:34.877 A filename is required. 
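"A filename is required." is the expected outcome here: accel_missing_filename deliberately runs accel_perf with -w compress but without -l, and the usage text further down confirms that compress/decompress workloads take their input from the file named by -l. The next test (accel_compress_verify) then exercises the complementary restriction, namely that -y (verify) is rejected for compression. An invocation that satisfies both of those documented constraints would look roughly like the following; this is a sketch assembled from flags and paths already present in this trace, not a run captured in the log:

  # compress workload with an input file and without the unsupported -y flag
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w compress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib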
00:06:34.877 02:26:08 -- common/autotest_common.sh@641 -- # es=234 00:06:34.877 02:26:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:34.877 02:26:08 -- common/autotest_common.sh@650 -- # es=106 00:06:34.877 02:26:08 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:34.877 02:26:08 -- common/autotest_common.sh@658 -- # es=1 00:06:34.877 02:26:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:34.877 00:06:34.877 real 0m0.286s 00:06:34.877 user 0m0.221s 00:06:34.877 sys 0m0.106s 00:06:34.877 02:26:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.877 02:26:08 -- common/autotest_common.sh@10 -- # set +x 00:06:34.877 ************************************ 00:06:34.877 END TEST accel_missing_filename 00:06:34.877 ************************************ 00:06:34.877 02:26:08 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.877 02:26:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:34.877 02:26:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.877 02:26:08 -- common/autotest_common.sh@10 -- # set +x 00:06:34.877 ************************************ 00:06:34.877 START TEST accel_compress_verify 00:06:34.878 ************************************ 00:06:34.878 02:26:08 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.878 02:26:08 -- common/autotest_common.sh@638 -- # local es=0 00:06:34.878 02:26:08 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.878 02:26:08 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:34.878 02:26:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:34.878 02:26:08 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:34.878 02:26:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:34.878 02:26:08 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.878 02:26:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.878 02:26:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.878 02:26:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.878 02:26:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.878 02:26:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.878 02:26:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.878 02:26:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.878 02:26:08 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.878 02:26:08 -- accel/accel.sh@41 -- # jq -r . 00:06:34.878 [2024-04-27 02:26:08.439293] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:06:34.878 [2024-04-27 02:26:08.439360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4122466 ] 00:06:34.878 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.139 [2024-04-27 02:26:08.500004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.139 [2024-04-27 02:26:08.561815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.139 [2024-04-27 02:26:08.593556] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.139 [2024-04-27 02:26:08.630306] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:35.139 00:06:35.139 Compression does not support the verify option, aborting. 00:06:35.139 02:26:08 -- common/autotest_common.sh@641 -- # es=161 00:06:35.139 02:26:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:35.139 02:26:08 -- common/autotest_common.sh@650 -- # es=33 00:06:35.139 02:26:08 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:35.139 02:26:08 -- common/autotest_common.sh@658 -- # es=1 00:06:35.139 02:26:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:35.139 00:06:35.139 real 0m0.269s 00:06:35.139 user 0m0.213s 00:06:35.139 sys 0m0.096s 00:06:35.139 02:26:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:35.139 02:26:08 -- common/autotest_common.sh@10 -- # set +x 00:06:35.139 ************************************ 00:06:35.139 END TEST accel_compress_verify 00:06:35.139 ************************************ 00:06:35.139 02:26:08 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:35.139 02:26:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:35.139 02:26:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.139 02:26:08 -- common/autotest_common.sh@10 -- # set +x 00:06:35.401 ************************************ 00:06:35.401 START TEST accel_wrong_workload 00:06:35.401 ************************************ 00:06:35.401 02:26:08 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:35.401 02:26:08 -- common/autotest_common.sh@638 -- # local es=0 00:06:35.401 02:26:08 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:35.401 02:26:08 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:35.401 02:26:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:35.401 02:26:08 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:35.401 02:26:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:35.401 02:26:08 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:35.401 02:26:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:35.401 02:26:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.401 02:26:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.401 02:26:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.401 02:26:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.401 02:26:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.401 02:26:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.401 02:26:08 -- accel/accel.sh@40 -- # local IFS=, 00:06:35.401 02:26:08 -- accel/accel.sh@41 -- # jq -r . 
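accel_compress_verify, just above, passes for the same reason: accel_perf aborts with "Compression does not support the verify option" when -w compress is combined with -y, and the NOT wrapper records the non-zero exit as success. The accel_wrong_workload run whose setup appears directly above is rejected even earlier, during argument parsing, as the usage dump below shows. A hedged sketch of both rejections, with paths taken from this log:

    accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    bib=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
    $accel_perf -t 1 -w compress -l $bib -y   # rejected: verify is not supported for compress
    $accel_perf -t 1 -w foobar                # rejected: unsupported workload type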
00:06:35.401 Unsupported workload type: foobar 00:06:35.401 [2024-04-27 02:26:08.885922] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:35.401 accel_perf options: 00:06:35.401 [-h help message] 00:06:35.401 [-q queue depth per core] 00:06:35.401 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:35.401 [-T number of threads per core 00:06:35.401 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:35.401 [-t time in seconds] 00:06:35.401 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:35.401 [ dif_verify, , dif_generate, dif_generate_copy 00:06:35.401 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:35.401 [-l for compress/decompress workloads, name of uncompressed input file 00:06:35.401 [-S for crc32c workload, use this seed value (default 0) 00:06:35.401 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:35.401 [-f for fill workload, use this BYTE value (default 255) 00:06:35.401 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:35.401 [-y verify result if this switch is on] 00:06:35.401 [-a tasks to allocate per core (default: same value as -q)] 00:06:35.401 Can be used to spread operations across a wider range of memory. 00:06:35.401 02:26:08 -- common/autotest_common.sh@641 -- # es=1 00:06:35.401 02:26:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:35.401 02:26:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:35.401 02:26:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:35.401 00:06:35.401 real 0m0.035s 00:06:35.401 user 0m0.020s 00:06:35.401 sys 0m0.015s 00:06:35.401 02:26:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:35.401 02:26:08 -- common/autotest_common.sh@10 -- # set +x 00:06:35.401 ************************************ 00:06:35.401 END TEST accel_wrong_workload 00:06:35.401 ************************************ 00:06:35.401 Error: writing output failed: Broken pipe 00:06:35.401 02:26:08 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:35.401 02:26:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:35.401 02:26:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.401 02:26:08 -- common/autotest_common.sh@10 -- # set +x 00:06:35.663 ************************************ 00:06:35.663 START TEST accel_negative_buffers 00:06:35.663 ************************************ 00:06:35.663 02:26:09 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:35.663 02:26:09 -- common/autotest_common.sh@638 -- # local es=0 00:06:35.663 02:26:09 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:35.663 02:26:09 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:35.663 02:26:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:35.663 02:26:09 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:35.663 02:26:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:35.663 02:26:09 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:35.663 02:26:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:35.663 02:26:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.663 02:26:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.663 02:26:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.663 02:26:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.663 02:26:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.663 02:26:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.663 02:26:09 -- accel/accel.sh@40 -- # local IFS=, 00:06:35.663 02:26:09 -- accel/accel.sh@41 -- # jq -r . 00:06:35.663 -x option must be non-negative. 00:06:35.663 [2024-04-27 02:26:09.110344] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:35.663 accel_perf options: 00:06:35.663 [-h help message] 00:06:35.663 [-q queue depth per core] 00:06:35.663 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:35.663 [-T number of threads per core 00:06:35.663 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:35.663 [-t time in seconds] 00:06:35.663 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:35.663 [ dif_verify, , dif_generate, dif_generate_copy 00:06:35.663 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:35.663 [-l for compress/decompress workloads, name of uncompressed input file 00:06:35.663 [-S for crc32c workload, use this seed value (default 0) 00:06:35.663 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:35.663 [-f for fill workload, use this BYTE value (default 255) 00:06:35.663 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:35.663 [-y verify result if this switch is on] 00:06:35.663 [-a tasks to allocate per core (default: same value as -q)] 00:06:35.663 Can be used to spread operations across a wider range of memory. 
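The usage dump above is printed because -x -1 fails validation ("-x option must be non-negative"); the exit-status checks that follow simply confirm the non-zero result and close accel_negative_buffers. Per the option help in the output, xor needs at least two source buffers, so a passing xor invocation would look roughly like this (sketch, same binary as above):

    $accel_perf -t 1 -w xor -y -x 2   # -x must be >= 2 for the xor workload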
00:06:35.663 02:26:09 -- common/autotest_common.sh@641 -- # es=1 00:06:35.663 02:26:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:35.663 02:26:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:35.663 02:26:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:35.663 00:06:35.663 real 0m0.036s 00:06:35.663 user 0m0.022s 00:06:35.663 sys 0m0.014s 00:06:35.663 02:26:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:35.663 02:26:09 -- common/autotest_common.sh@10 -- # set +x 00:06:35.663 ************************************ 00:06:35.663 END TEST accel_negative_buffers 00:06:35.663 ************************************ 00:06:35.663 Error: writing output failed: Broken pipe 00:06:35.663 02:26:09 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:35.663 02:26:09 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:35.663 02:26:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.663 02:26:09 -- common/autotest_common.sh@10 -- # set +x 00:06:35.663 ************************************ 00:06:35.663 START TEST accel_crc32c 00:06:35.663 ************************************ 00:06:35.663 02:26:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:35.663 02:26:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.663 02:26:09 -- accel/accel.sh@17 -- # local accel_module 00:06:35.663 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.663 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.663 02:26:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:35.663 02:26:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:35.663 02:26:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.663 02:26:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.663 02:26:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.663 02:26:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.663 02:26:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.663 02:26:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.663 02:26:09 -- accel/accel.sh@40 -- # local IFS=, 00:06:35.663 02:26:09 -- accel/accel.sh@41 -- # jq -r . 00:06:35.924 [2024-04-27 02:26:09.300259] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:06:35.924 [2024-04-27 02:26:09.300337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4122790 ] 00:06:35.924 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.924 [2024-04-27 02:26:09.364677] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.924 [2024-04-27 02:26:09.438647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val= 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val= 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val=0x1 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val= 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val= 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val=crc32c 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val=32 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val= 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val=software 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@22 -- # accel_module=software 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val=32 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val=32 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- 
accel/accel.sh@20 -- # val=1 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val=Yes 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val= 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:35.925 02:26:09 -- accel/accel.sh@20 -- # val= 00:06:35.925 02:26:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # IFS=: 00:06:35.925 02:26:09 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val= 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val= 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val= 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val= 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val= 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val= 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.311 02:26:10 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:37.311 02:26:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.311 00:06:37.311 real 0m1.295s 00:06:37.311 user 0m1.206s 00:06:37.311 sys 0m0.100s 00:06:37.311 02:26:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.311 02:26:10 -- common/autotest_common.sh@10 -- # set +x 00:06:37.311 ************************************ 00:06:37.311 END TEST accel_crc32c 00:06:37.311 ************************************ 00:06:37.311 02:26:10 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:37.311 02:26:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:37.311 02:26:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.311 02:26:10 -- common/autotest_common.sh@10 -- # set +x 00:06:37.311 ************************************ 00:06:37.311 START TEST 
accel_crc32c_C2 00:06:37.311 ************************************ 00:06:37.311 02:26:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:37.311 02:26:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.311 02:26:10 -- accel/accel.sh@17 -- # local accel_module 00:06:37.311 02:26:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:37.311 02:26:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.311 02:26:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.311 02:26:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.311 02:26:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.311 02:26:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.311 02:26:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.311 02:26:10 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.311 02:26:10 -- accel/accel.sh@41 -- # jq -r . 00:06:37.311 [2024-04-27 02:26:10.717906] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:37.311 [2024-04-27 02:26:10.717949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4123015 ] 00:06:37.311 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.311 [2024-04-27 02:26:10.775008] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.311 [2024-04-27 02:26:10.837523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val= 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val= 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val=0x1 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val= 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val= 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val=crc32c 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val=0 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val= 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val=software 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@22 -- # accel_module=software 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val=32 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val=32 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val=1 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val=Yes 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val= 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 02:26:10 -- accel/accel.sh@20 -- # val= 00:06:37.311 02:26:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 02:26:10 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:11 -- accel/accel.sh@20 -- # val= 00:06:38.698 02:26:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:11 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:11 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:11 -- accel/accel.sh@20 -- # val= 00:06:38.698 02:26:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:11 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:11 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:11 -- accel/accel.sh@20 -- # val= 00:06:38.698 02:26:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:11 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:11 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:11 -- accel/accel.sh@20 -- # val= 00:06:38.698 02:26:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:11 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:11 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:11 -- accel/accel.sh@20 -- # val= 00:06:38.698 02:26:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:11 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:11 -- 
accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:11 -- accel/accel.sh@20 -- # val= 00:06:38.698 02:26:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:11 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:11 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.698 02:26:11 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:38.698 02:26:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.698 00:06:38.698 real 0m1.262s 00:06:38.698 user 0m1.183s 00:06:38.698 sys 0m0.090s 00:06:38.698 02:26:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.698 02:26:11 -- common/autotest_common.sh@10 -- # set +x 00:06:38.698 ************************************ 00:06:38.698 END TEST accel_crc32c_C2 00:06:38.698 ************************************ 00:06:38.698 02:26:11 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:38.698 02:26:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:38.698 02:26:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.698 02:26:11 -- common/autotest_common.sh@10 -- # set +x 00:06:38.698 ************************************ 00:06:38.698 START TEST accel_copy 00:06:38.698 ************************************ 00:06:38.698 02:26:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:38.698 02:26:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.698 02:26:12 -- accel/accel.sh@17 -- # local accel_module 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:38.698 02:26:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:38.698 02:26:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.698 02:26:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.698 02:26:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.698 02:26:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.698 02:26:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.698 02:26:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.698 02:26:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:38.698 02:26:12 -- accel/accel.sh@41 -- # jq -r . 00:06:38.698 [2024-04-27 02:26:12.132628] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:06:38.698 [2024-04-27 02:26:12.132688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4123277 ] 00:06:38.698 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.698 [2024-04-27 02:26:12.194329] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.698 [2024-04-27 02:26:12.258804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val= 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val= 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val=0x1 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val= 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val= 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val=copy 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val= 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val=software 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@22 -- # accel_module=software 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val=32 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val=32 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val=1 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val=Yes 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val= 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:38.698 02:26:12 -- accel/accel.sh@20 -- # val= 00:06:38.698 02:26:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # IFS=: 00:06:38.698 02:26:12 -- accel/accel.sh@19 -- # read -r var val 00:06:40.083 02:26:13 -- accel/accel.sh@20 -- # val= 00:06:40.083 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.083 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.083 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.083 02:26:13 -- accel/accel.sh@20 -- # val= 00:06:40.083 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.083 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.083 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.083 02:26:13 -- accel/accel.sh@20 -- # val= 00:06:40.083 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.083 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.083 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.083 02:26:13 -- accel/accel.sh@20 -- # val= 00:06:40.083 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.083 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.083 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.083 02:26:13 -- accel/accel.sh@20 -- # val= 00:06:40.083 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.083 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.083 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.083 02:26:13 -- accel/accel.sh@20 -- # val= 00:06:40.083 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.083 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.083 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.083 02:26:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.083 02:26:13 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:40.083 02:26:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.083 00:06:40.083 real 0m1.281s 00:06:40.083 user 0m1.191s 00:06:40.083 sys 0m0.100s 00:06:40.083 02:26:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.083 02:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:40.083 ************************************ 00:06:40.083 END TEST accel_copy 00:06:40.083 ************************************ 00:06:40.083 02:26:13 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.083 02:26:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:40.083 02:26:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.083 02:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:40.083 ************************************ 00:06:40.083 START TEST accel_fill 00:06:40.083 ************************************ 00:06:40.084 02:26:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.084 02:26:13 -- accel/accel.sh@16 -- # local accel_opc 
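The case starting here, accel_fill, runs the fill workload with -f 128 -q 64 -a 64 -y; in the xtrace that follows, the fill byte shows up as val=0x80 (128 in hexadecimal) and the queue-depth/task values as val=64. Reduced to a sketch using the binary path from this log:

    $accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill byte 128, queue depth 64, 64 tasks per core, verify on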
00:06:40.084 02:26:13 -- accel/accel.sh@17 -- # local accel_module 00:06:40.084 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.084 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.084 02:26:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.084 02:26:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.084 02:26:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.084 02:26:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.084 02:26:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.084 02:26:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.084 02:26:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.084 02:26:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.084 02:26:13 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.084 02:26:13 -- accel/accel.sh@41 -- # jq -r . 00:06:40.084 [2024-04-27 02:26:13.555960] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:40.084 [2024-04-27 02:26:13.556021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4123633 ] 00:06:40.084 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.084 [2024-04-27 02:26:13.615796] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.084 [2024-04-27 02:26:13.678680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val= 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val= 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val=0x1 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val= 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val= 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val=fill 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val=0x80 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 
-- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val= 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val=software 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@22 -- # accel_module=software 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val=64 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val=64 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val=1 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val=Yes 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val= 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:40.344 02:26:13 -- accel/accel.sh@20 -- # val= 00:06:40.344 02:26:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # IFS=: 00:06:40.344 02:26:13 -- accel/accel.sh@19 -- # read -r var val 00:06:41.287 02:26:14 -- accel/accel.sh@20 -- # val= 00:06:41.287 02:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.287 02:26:14 -- accel/accel.sh@19 -- # IFS=: 00:06:41.287 02:26:14 -- accel/accel.sh@19 -- # read -r var val 00:06:41.287 02:26:14 -- accel/accel.sh@20 -- # val= 00:06:41.287 02:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.287 02:26:14 -- accel/accel.sh@19 -- # IFS=: 00:06:41.287 02:26:14 -- accel/accel.sh@19 -- # read -r var val 00:06:41.287 02:26:14 -- accel/accel.sh@20 -- # val= 00:06:41.287 02:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.287 02:26:14 -- accel/accel.sh@19 -- # IFS=: 00:06:41.287 02:26:14 -- accel/accel.sh@19 -- # read -r var val 00:06:41.287 02:26:14 -- accel/accel.sh@20 -- # val= 00:06:41.287 02:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.287 02:26:14 -- accel/accel.sh@19 -- # IFS=: 00:06:41.287 02:26:14 -- accel/accel.sh@19 -- # read -r var val 00:06:41.287 02:26:14 -- accel/accel.sh@20 -- # val= 00:06:41.287 02:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.287 02:26:14 -- accel/accel.sh@19 -- # IFS=: 00:06:41.287 02:26:14 -- accel/accel.sh@19 -- # read -r var val 00:06:41.287 02:26:14 -- accel/accel.sh@20 -- # val= 00:06:41.287 02:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.287 02:26:14 -- accel/accel.sh@19 
-- # IFS=: 00:06:41.287 02:26:14 -- accel/accel.sh@19 -- # read -r var val 00:06:41.287 02:26:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.287 02:26:14 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:41.287 02:26:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.287 00:06:41.287 real 0m1.274s 00:06:41.287 user 0m1.186s 00:06:41.287 sys 0m0.092s 00:06:41.287 02:26:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:41.287 02:26:14 -- common/autotest_common.sh@10 -- # set +x 00:06:41.287 ************************************ 00:06:41.287 END TEST accel_fill 00:06:41.287 ************************************ 00:06:41.287 02:26:14 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:41.287 02:26:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:41.287 02:26:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.287 02:26:14 -- common/autotest_common.sh@10 -- # set +x 00:06:41.549 ************************************ 00:06:41.549 START TEST accel_copy_crc32c 00:06:41.549 ************************************ 00:06:41.549 02:26:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:41.549 02:26:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.549 02:26:14 -- accel/accel.sh@17 -- # local accel_module 00:06:41.549 02:26:14 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:14 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:41.549 02:26:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:41.549 02:26:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.549 02:26:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.549 02:26:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.549 02:26:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.549 02:26:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.549 02:26:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.549 02:26:14 -- accel/accel.sh@40 -- # local IFS=, 00:06:41.549 02:26:14 -- accel/accel.sh@41 -- # jq -r . 00:06:41.549 [2024-04-27 02:26:15.006131] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:06:41.549 [2024-04-27 02:26:15.006220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4123989 ] 00:06:41.549 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.549 [2024-04-27 02:26:15.067711] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.549 [2024-04-27 02:26:15.130044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val= 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val= 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val=0x1 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val= 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val= 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val=0 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val= 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val=software 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@22 -- # accel_module=software 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val=32 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 
00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val=32 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val=1 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val=Yes 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val= 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:41.549 02:26:15 -- accel/accel.sh@20 -- # val= 00:06:41.549 02:26:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # IFS=: 00:06:41.549 02:26:15 -- accel/accel.sh@19 -- # read -r var val 00:06:42.933 02:26:16 -- accel/accel.sh@20 -- # val= 00:06:42.933 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:42.933 02:26:16 -- accel/accel.sh@20 -- # val= 00:06:42.933 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:42.933 02:26:16 -- accel/accel.sh@20 -- # val= 00:06:42.933 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:42.933 02:26:16 -- accel/accel.sh@20 -- # val= 00:06:42.933 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:42.933 02:26:16 -- accel/accel.sh@20 -- # val= 00:06:42.933 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:42.933 02:26:16 -- accel/accel.sh@20 -- # val= 00:06:42.933 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:42.933 02:26:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.933 02:26:16 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:42.933 02:26:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.933 00:06:42.933 real 0m1.279s 00:06:42.933 user 0m1.190s 00:06:42.933 sys 0m0.095s 00:06:42.933 02:26:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.933 02:26:16 -- common/autotest_common.sh@10 -- # set +x 00:06:42.933 ************************************ 00:06:42.933 END TEST accel_copy_crc32c 00:06:42.933 ************************************ 00:06:42.933 02:26:16 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:42.933 
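The positive cases from accel_crc32c onward all follow the same pattern visible in the run_test lines: accel_test builds the accel JSON config on /dev/fd/62 (apparently empty in these runs, given the [[ 0 -gt 0 ]] and [[ -n '' ]] checks), runs accel_perf for one second with -y result verification, and the closing assertions ([[ -n software ]], [[ -n <opcode> ]]) confirm that the software module executed the expected opcode. Collected from the log, roughly:

    # run_test accel_crc32c      accel_test -t 1 -w crc32c -S 32 -y
    # run_test accel_copy        accel_test -t 1 -w copy -y
    # run_test accel_fill        accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
    # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y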
02:26:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:42.933 02:26:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.933 02:26:16 -- common/autotest_common.sh@10 -- # set +x 00:06:42.933 ************************************ 00:06:42.933 START TEST accel_copy_crc32c_C2 00:06:42.933 ************************************ 00:06:42.933 02:26:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:42.933 02:26:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.933 02:26:16 -- accel/accel.sh@17 -- # local accel_module 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:42.933 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:42.933 02:26:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:42.933 02:26:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:42.933 02:26:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.933 02:26:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.933 02:26:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.933 02:26:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.933 02:26:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.933 02:26:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.933 02:26:16 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.933 02:26:16 -- accel/accel.sh@41 -- # jq -r . 00:06:42.933 [2024-04-27 02:26:16.449420] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:42.933 [2024-04-27 02:26:16.449509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4124350 ] 00:06:42.933 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.933 [2024-04-27 02:26:16.511005] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.194 [2024-04-27 02:26:16.574410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val= 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val= 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val=0x1 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val= 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val= 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 
02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val=0 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val= 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val=software 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@22 -- # accel_module=software 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val=32 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val=32 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val=1 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val=Yes 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val= 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:43.194 02:26:16 -- accel/accel.sh@20 -- # val= 00:06:43.194 02:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # IFS=: 00:06:43.194 02:26:16 -- accel/accel.sh@19 -- # read -r var val 00:06:44.138 02:26:17 -- accel/accel.sh@20 -- # val= 00:06:44.138 02:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.138 02:26:17 -- accel/accel.sh@19 -- # IFS=: 00:06:44.138 02:26:17 -- accel/accel.sh@19 -- # read -r var val 00:06:44.138 02:26:17 -- accel/accel.sh@20 -- # val= 00:06:44.138 02:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.138 02:26:17 -- accel/accel.sh@19 -- # IFS=: 00:06:44.138 02:26:17 -- accel/accel.sh@19 -- # read -r var val 00:06:44.138 02:26:17 -- accel/accel.sh@20 -- # val= 00:06:44.138 02:26:17 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:44.138 02:26:17 -- accel/accel.sh@19 -- # IFS=: 00:06:44.138 02:26:17 -- accel/accel.sh@19 -- # read -r var val 00:06:44.138 02:26:17 -- accel/accel.sh@20 -- # val= 00:06:44.138 02:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.138 02:26:17 -- accel/accel.sh@19 -- # IFS=: 00:06:44.138 02:26:17 -- accel/accel.sh@19 -- # read -r var val 00:06:44.138 02:26:17 -- accel/accel.sh@20 -- # val= 00:06:44.138 02:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.138 02:26:17 -- accel/accel.sh@19 -- # IFS=: 00:06:44.138 02:26:17 -- accel/accel.sh@19 -- # read -r var val 00:06:44.138 02:26:17 -- accel/accel.sh@20 -- # val= 00:06:44.138 02:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.138 02:26:17 -- accel/accel.sh@19 -- # IFS=: 00:06:44.138 02:26:17 -- accel/accel.sh@19 -- # read -r var val 00:06:44.138 02:26:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.138 02:26:17 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:44.138 02:26:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.138 00:06:44.138 real 0m1.279s 00:06:44.138 user 0m1.185s 00:06:44.138 sys 0m0.098s 00:06:44.138 02:26:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:44.138 02:26:17 -- common/autotest_common.sh@10 -- # set +x 00:06:44.138 ************************************ 00:06:44.138 END TEST accel_copy_crc32c_C2 00:06:44.138 ************************************ 00:06:44.138 02:26:17 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:44.138 02:26:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:44.138 02:26:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.138 02:26:17 -- common/autotest_common.sh@10 -- # set +x 00:06:44.399 ************************************ 00:06:44.399 START TEST accel_dualcast 00:06:44.399 ************************************ 00:06:44.399 02:26:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:44.399 02:26:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.399 02:26:17 -- accel/accel.sh@17 -- # local accel_module 00:06:44.399 02:26:17 -- accel/accel.sh@19 -- # IFS=: 00:06:44.399 02:26:17 -- accel/accel.sh@19 -- # read -r var val 00:06:44.399 02:26:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:44.399 02:26:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:44.399 02:26:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.399 02:26:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.399 02:26:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.399 02:26:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.399 02:26:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.399 02:26:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.399 02:26:17 -- accel/accel.sh@40 -- # local IFS=, 00:06:44.399 02:26:17 -- accel/accel.sh@41 -- # jq -r . 00:06:44.399 [2024-04-27 02:26:17.894103] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:06:44.399 [2024-04-27 02:26:17.894193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4124676 ] 00:06:44.399 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.399 [2024-04-27 02:26:17.956785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.660 [2024-04-27 02:26:18.022319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val= 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val= 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val=0x1 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val= 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val= 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val=dualcast 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val= 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val=software 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@22 -- # accel_module=software 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val=32 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val=32 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val=1 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 
-- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val=Yes 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val= 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:44.660 02:26:18 -- accel/accel.sh@20 -- # val= 00:06:44.660 02:26:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # IFS=: 00:06:44.660 02:26:18 -- accel/accel.sh@19 -- # read -r var val 00:06:45.603 02:26:19 -- accel/accel.sh@20 -- # val= 00:06:45.603 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.603 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:45.603 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:45.603 02:26:19 -- accel/accel.sh@20 -- # val= 00:06:45.603 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.603 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:45.603 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:45.603 02:26:19 -- accel/accel.sh@20 -- # val= 00:06:45.603 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.603 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:45.603 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:45.603 02:26:19 -- accel/accel.sh@20 -- # val= 00:06:45.603 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.603 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:45.603 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:45.603 02:26:19 -- accel/accel.sh@20 -- # val= 00:06:45.603 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.603 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:45.603 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:45.603 02:26:19 -- accel/accel.sh@20 -- # val= 00:06:45.603 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.603 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:45.603 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:45.603 02:26:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.603 02:26:19 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:45.603 02:26:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.603 00:06:45.603 real 0m1.282s 00:06:45.604 user 0m1.190s 00:06:45.604 sys 0m0.096s 00:06:45.604 02:26:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:45.604 02:26:19 -- common/autotest_common.sh@10 -- # set +x 00:06:45.604 ************************************ 00:06:45.604 END TEST accel_dualcast 00:06:45.604 ************************************ 00:06:45.604 02:26:19 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:45.604 02:26:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:45.604 02:26:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.604 02:26:19 -- common/autotest_common.sh@10 -- # set +x 00:06:45.865 ************************************ 00:06:45.865 START TEST accel_compare 00:06:45.865 ************************************ 00:06:45.865 02:26:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:45.865 02:26:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.865 02:26:19 
-- accel/accel.sh@17 -- # local accel_module 00:06:45.865 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:45.865 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:45.865 02:26:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:45.865 02:26:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:45.865 02:26:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.865 02:26:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.865 02:26:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.865 02:26:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.865 02:26:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.865 02:26:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.865 02:26:19 -- accel/accel.sh@40 -- # local IFS=, 00:06:45.865 02:26:19 -- accel/accel.sh@41 -- # jq -r . 00:06:45.865 [2024-04-27 02:26:19.329662] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:45.865 [2024-04-27 02:26:19.329729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4124905 ] 00:06:45.865 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.865 [2024-04-27 02:26:19.392999] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.865 [2024-04-27 02:26:19.463164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val= 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val= 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val=0x1 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val= 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val= 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val=compare 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val= 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- 
accel/accel.sh@20 -- # val=software 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@22 -- # accel_module=software 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val=32 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val=32 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val=1 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val=Yes 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val= 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:46.128 02:26:19 -- accel/accel.sh@20 -- # val= 00:06:46.128 02:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # IFS=: 00:06:46.128 02:26:19 -- accel/accel.sh@19 -- # read -r var val 00:06:47.069 02:26:20 -- accel/accel.sh@20 -- # val= 00:06:47.069 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.069 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.069 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.069 02:26:20 -- accel/accel.sh@20 -- # val= 00:06:47.069 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.070 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.070 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.070 02:26:20 -- accel/accel.sh@20 -- # val= 00:06:47.070 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.070 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.070 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.070 02:26:20 -- accel/accel.sh@20 -- # val= 00:06:47.070 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.070 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.070 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.070 02:26:20 -- accel/accel.sh@20 -- # val= 00:06:47.070 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.070 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.070 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.070 02:26:20 -- accel/accel.sh@20 -- # val= 00:06:47.070 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.070 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.070 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.070 02:26:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.070 02:26:20 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:47.070 02:26:20 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:47.070 00:06:47.070 real 0m1.282s 00:06:47.070 user 0m1.187s 00:06:47.070 sys 0m0.098s 00:06:47.070 02:26:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:47.070 02:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:47.070 ************************************ 00:06:47.070 END TEST accel_compare 00:06:47.070 ************************************ 00:06:47.070 02:26:20 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:47.070 02:26:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:47.070 02:26:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.070 02:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:47.331 ************************************ 00:06:47.331 START TEST accel_xor 00:06:47.331 ************************************ 00:06:47.331 02:26:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:47.331 02:26:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.331 02:26:20 -- accel/accel.sh@17 -- # local accel_module 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:47.331 02:26:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:47.331 02:26:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.331 02:26:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.331 02:26:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.331 02:26:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.331 02:26:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.331 02:26:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.331 02:26:20 -- accel/accel.sh@40 -- # local IFS=, 00:06:47.331 02:26:20 -- accel/accel.sh@41 -- # jq -r . 00:06:47.331 [2024-04-27 02:26:20.783683] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:06:47.331 [2024-04-27 02:26:20.783747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4125130 ] 00:06:47.331 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.331 [2024-04-27 02:26:20.846422] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.331 [2024-04-27 02:26:20.912078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val= 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val= 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val=0x1 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val= 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val= 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val=xor 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val=2 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val= 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val=software 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@22 -- # accel_module=software 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val=32 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val=32 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- 
accel/accel.sh@20 -- # val=1 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val=Yes 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val= 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:47.331 02:26:20 -- accel/accel.sh@20 -- # val= 00:06:47.331 02:26:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # IFS=: 00:06:47.331 02:26:20 -- accel/accel.sh@19 -- # read -r var val 00:06:48.715 02:26:22 -- accel/accel.sh@20 -- # val= 00:06:48.715 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.715 02:26:22 -- accel/accel.sh@20 -- # val= 00:06:48.715 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.715 02:26:22 -- accel/accel.sh@20 -- # val= 00:06:48.715 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.715 02:26:22 -- accel/accel.sh@20 -- # val= 00:06:48.715 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.715 02:26:22 -- accel/accel.sh@20 -- # val= 00:06:48.715 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.715 02:26:22 -- accel/accel.sh@20 -- # val= 00:06:48.715 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.715 02:26:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.715 02:26:22 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:48.715 02:26:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.715 00:06:48.715 real 0m1.280s 00:06:48.715 user 0m1.181s 00:06:48.715 sys 0m0.100s 00:06:48.715 02:26:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:48.715 02:26:22 -- common/autotest_common.sh@10 -- # set +x 00:06:48.715 ************************************ 00:06:48.715 END TEST accel_xor 00:06:48.715 ************************************ 00:06:48.715 02:26:22 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:48.715 02:26:22 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:48.715 02:26:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.715 02:26:22 -- common/autotest_common.sh@10 -- # set +x 00:06:48.715 ************************************ 00:06:48.715 START TEST accel_xor 
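The xor workload is then repeated with -x 3 (run_test accel_xor accel_test -t 1 -w xor -y -x 3 above). The value the parser reads back changes from val=2 in the pass that just finished to val=3 below, which suggests -x selects the number of xor source buffers; treat that as an inference from the trace rather than documented accel_perf behaviour. The command the wrapper runs, as recorded further down in the trace, is:

    # 1-second xor pass, three source buffers assumed from the -x 3 / val=3 pairing
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3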
00:06:48.715 ************************************ 00:06:48.715 02:26:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:48.715 02:26:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.715 02:26:22 -- accel/accel.sh@17 -- # local accel_module 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.715 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.715 02:26:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:48.716 02:26:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:48.716 02:26:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.716 02:26:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.716 02:26:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.716 02:26:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.716 02:26:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.716 02:26:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.716 02:26:22 -- accel/accel.sh@40 -- # local IFS=, 00:06:48.716 02:26:22 -- accel/accel.sh@41 -- # jq -r . 00:06:48.716 [2024-04-27 02:26:22.217707] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:48.716 [2024-04-27 02:26:22.217798] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4125466 ] 00:06:48.716 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.716 [2024-04-27 02:26:22.279592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.977 [2024-04-27 02:26:22.344192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val= 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val= 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val=0x1 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val= 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val= 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val=xor 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val=3 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val= 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val=software 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@22 -- # accel_module=software 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val=32 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val=32 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val=1 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val=Yes 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val= 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:48.977 02:26:22 -- accel/accel.sh@20 -- # val= 00:06:48.977 02:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # IFS=: 00:06:48.977 02:26:22 -- accel/accel.sh@19 -- # read -r var val 00:06:49.920 02:26:23 -- accel/accel.sh@20 -- # val= 00:06:49.920 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.920 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:49.920 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:49.920 02:26:23 -- accel/accel.sh@20 -- # val= 00:06:49.920 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.920 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:49.920 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:49.920 02:26:23 -- accel/accel.sh@20 -- # val= 00:06:49.920 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.920 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:49.920 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:49.920 02:26:23 -- accel/accel.sh@20 -- # val= 00:06:49.920 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.920 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:49.920 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:49.920 02:26:23 -- accel/accel.sh@20 -- # val= 00:06:49.920 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.920 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:49.920 02:26:23 -- accel/accel.sh@19 -- # 
read -r var val 00:06:49.920 02:26:23 -- accel/accel.sh@20 -- # val= 00:06:49.920 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.920 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:49.920 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:49.920 02:26:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.920 02:26:23 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:49.920 02:26:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.920 00:06:49.920 real 0m1.279s 00:06:49.920 user 0m1.179s 00:06:49.920 sys 0m0.101s 00:06:49.920 02:26:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.920 02:26:23 -- common/autotest_common.sh@10 -- # set +x 00:06:49.920 ************************************ 00:06:49.920 END TEST accel_xor 00:06:49.920 ************************************ 00:06:49.920 02:26:23 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:49.920 02:26:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:49.920 02:26:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.920 02:26:23 -- common/autotest_common.sh@10 -- # set +x 00:06:50.216 ************************************ 00:06:50.216 START TEST accel_dif_verify 00:06:50.216 ************************************ 00:06:50.216 02:26:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:50.216 02:26:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.216 02:26:23 -- accel/accel.sh@17 -- # local accel_module 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:50.216 02:26:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:50.216 02:26:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.216 02:26:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.216 02:26:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.216 02:26:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.216 02:26:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.216 02:26:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.216 02:26:23 -- accel/accel.sh@40 -- # local IFS=, 00:06:50.216 02:26:23 -- accel/accel.sh@41 -- # jq -r . 00:06:50.216 [2024-04-27 02:26:23.667637] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
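Next comes accel_dif_verify. Its trace below feeds the perf tool two '4096 bytes' buffers plus '512 bytes' and '8 bytes' values, which is consistent with 512-byte blocks each carrying an 8-byte DIF, though the trace itself does not name those fields. The wrapped command, as recorded in the trace, is:

    # 1-second dif_verify pass; /dev/fd/62 presumably carries the JSON config built by build_accel_config
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify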
00:06:50.216 [2024-04-27 02:26:23.667736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4125825 ] 00:06:50.216 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.216 [2024-04-27 02:26:23.728474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.216 [2024-04-27 02:26:23.792153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val= 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val= 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val=0x1 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val= 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val= 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val=dif_verify 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val= 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val=software 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@22 -- # accel_module=software 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r 
var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val=32 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val=32 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val=1 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.216 02:26:23 -- accel/accel.sh@20 -- # val=No 00:06:50.216 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.216 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.217 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.217 02:26:23 -- accel/accel.sh@20 -- # val= 00:06:50.217 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.217 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.217 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:50.217 02:26:23 -- accel/accel.sh@20 -- # val= 00:06:50.217 02:26:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.217 02:26:23 -- accel/accel.sh@19 -- # IFS=: 00:06:50.217 02:26:23 -- accel/accel.sh@19 -- # read -r var val 00:06:51.600 02:26:24 -- accel/accel.sh@20 -- # val= 00:06:51.600 02:26:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.600 02:26:24 -- accel/accel.sh@19 -- # IFS=: 00:06:51.600 02:26:24 -- accel/accel.sh@19 -- # read -r var val 00:06:51.600 02:26:24 -- accel/accel.sh@20 -- # val= 00:06:51.601 02:26:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.601 02:26:24 -- accel/accel.sh@19 -- # IFS=: 00:06:51.601 02:26:24 -- accel/accel.sh@19 -- # read -r var val 00:06:51.601 02:26:24 -- accel/accel.sh@20 -- # val= 00:06:51.601 02:26:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.601 02:26:24 -- accel/accel.sh@19 -- # IFS=: 00:06:51.601 02:26:24 -- accel/accel.sh@19 -- # read -r var val 00:06:51.601 02:26:24 -- accel/accel.sh@20 -- # val= 00:06:51.601 02:26:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.601 02:26:24 -- accel/accel.sh@19 -- # IFS=: 00:06:51.601 02:26:24 -- accel/accel.sh@19 -- # read -r var val 00:06:51.601 02:26:24 -- accel/accel.sh@20 -- # val= 00:06:51.601 02:26:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.601 02:26:24 -- accel/accel.sh@19 -- # IFS=: 00:06:51.601 02:26:24 -- accel/accel.sh@19 -- # read -r var val 00:06:51.601 02:26:24 -- accel/accel.sh@20 -- # val= 00:06:51.601 02:26:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.601 02:26:24 -- accel/accel.sh@19 -- # IFS=: 00:06:51.601 02:26:24 -- accel/accel.sh@19 -- # read -r var val 00:06:51.601 02:26:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.601 02:26:24 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:51.601 02:26:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.601 00:06:51.601 real 0m1.278s 00:06:51.601 user 0m1.181s 00:06:51.601 sys 0m0.099s 00:06:51.601 02:26:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:51.601 02:26:24 -- common/autotest_common.sh@10 -- # set +x 00:06:51.601 
************************************ 00:06:51.601 END TEST accel_dif_verify 00:06:51.601 ************************************ 00:06:51.601 02:26:24 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:51.601 02:26:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:51.601 02:26:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.601 02:26:24 -- common/autotest_common.sh@10 -- # set +x 00:06:51.601 ************************************ 00:06:51.601 START TEST accel_dif_generate 00:06:51.601 ************************************ 00:06:51.601 02:26:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:51.601 02:26:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.601 02:26:25 -- accel/accel.sh@17 -- # local accel_module 00:06:51.601 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.601 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.601 02:26:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:51.601 02:26:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:51.601 02:26:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.601 02:26:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.601 02:26:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.601 02:26:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.601 02:26:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.601 02:26:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.601 02:26:25 -- accel/accel.sh@40 -- # local IFS=, 00:06:51.601 02:26:25 -- accel/accel.sh@41 -- # jq -r . 00:06:51.601 [2024-04-27 02:26:25.113405] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
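Each pass is judged by the three accel.sh@27 checks visible above ([[ -n software ]], [[ -n dif_verify ]], [[ software == \s\o\f\t\w\a\r\e ]]). After expansion they only assert that a module and an opcode were parsed out of the report and that the module is the software one; any data checking is left to accel_perf itself. In unexpanded form they are, roughly:

    [[ -n $accel_module ]]             # some accel module was reported
    [[ -n $accel_opc ]]                # the expected opcode was reported
    [[ $accel_module == software ]]    # and it ran on the software engine, not a hardware offload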
00:06:51.601 [2024-04-27 02:26:25.113468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4126182 ] 00:06:51.601 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.601 [2024-04-27 02:26:25.175436] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.862 [2024-04-27 02:26:25.242407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.862 02:26:25 -- accel/accel.sh@20 -- # val= 00:06:51.862 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.862 02:26:25 -- accel/accel.sh@20 -- # val= 00:06:51.862 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.862 02:26:25 -- accel/accel.sh@20 -- # val=0x1 00:06:51.862 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.862 02:26:25 -- accel/accel.sh@20 -- # val= 00:06:51.862 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.862 02:26:25 -- accel/accel.sh@20 -- # val= 00:06:51.862 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.862 02:26:25 -- accel/accel.sh@20 -- # val=dif_generate 00:06:51.862 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.862 02:26:25 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.862 02:26:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.862 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.862 02:26:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.862 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.862 02:26:25 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:51.862 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.862 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.863 02:26:25 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:51.863 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.863 02:26:25 -- accel/accel.sh@20 -- # val= 00:06:51.863 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.863 02:26:25 -- accel/accel.sh@20 -- # val=software 00:06:51.863 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.863 02:26:25 -- accel/accel.sh@22 -- # accel_module=software 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # read 
-r var val 00:06:51.863 02:26:25 -- accel/accel.sh@20 -- # val=32 00:06:51.863 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.863 02:26:25 -- accel/accel.sh@20 -- # val=32 00:06:51.863 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.863 02:26:25 -- accel/accel.sh@20 -- # val=1 00:06:51.863 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.863 02:26:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.863 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.863 02:26:25 -- accel/accel.sh@20 -- # val=No 00:06:51.863 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.863 02:26:25 -- accel/accel.sh@20 -- # val= 00:06:51.863 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:51.863 02:26:25 -- accel/accel.sh@20 -- # val= 00:06:51.863 02:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # IFS=: 00:06:51.863 02:26:25 -- accel/accel.sh@19 -- # read -r var val 00:06:52.806 02:26:26 -- accel/accel.sh@20 -- # val= 00:06:52.806 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.806 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:52.806 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:52.806 02:26:26 -- accel/accel.sh@20 -- # val= 00:06:52.806 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.806 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:52.806 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:52.806 02:26:26 -- accel/accel.sh@20 -- # val= 00:06:52.806 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.806 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:52.806 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:52.806 02:26:26 -- accel/accel.sh@20 -- # val= 00:06:52.806 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.806 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:52.806 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:52.806 02:26:26 -- accel/accel.sh@20 -- # val= 00:06:52.806 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.806 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:52.806 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:52.806 02:26:26 -- accel/accel.sh@20 -- # val= 00:06:52.806 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.806 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:52.806 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:52.806 02:26:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.806 02:26:26 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:52.806 02:26:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.806 00:06:52.806 real 0m1.281s 00:06:52.806 user 0m1.186s 00:06:52.806 sys 0m0.097s 00:06:52.806 02:26:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.806 02:26:26 -- common/autotest_common.sh@10 -- # set +x 00:06:52.806 
************************************ 00:06:52.806 END TEST accel_dif_generate 00:06:52.806 ************************************ 00:06:52.806 02:26:26 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:52.806 02:26:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:52.806 02:26:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.806 02:26:26 -- common/autotest_common.sh@10 -- # set +x 00:06:53.068 ************************************ 00:06:53.068 START TEST accel_dif_generate_copy 00:06:53.068 ************************************ 00:06:53.068 02:26:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:53.068 02:26:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.068 02:26:26 -- accel/accel.sh@17 -- # local accel_module 00:06:53.068 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.068 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.068 02:26:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:53.068 02:26:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:53.068 02:26:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.068 02:26:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.068 02:26:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.068 02:26:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.068 02:26:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.068 02:26:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.068 02:26:26 -- accel/accel.sh@40 -- # local IFS=, 00:06:53.068 02:26:26 -- accel/accel.sh@41 -- # jq -r . 00:06:53.068 [2024-04-27 02:26:26.568407] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
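The dif_generate_copy pass that starts below is launched the same way as the others, via run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy, with run_test supplying the START TEST / END TEST banners and the real/user/sys timing line around the test body. A simplified, hypothetical form of that wrapper (not the literal autotest_common.sh source) is:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"        # printed as the banner seen in the log
        time "$@"                      # runs e.g. accel_test -t 1 -w dif_generate_copy, yielding real/user/sys
        echo "END TEST $name"
    }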
00:06:53.068 [2024-04-27 02:26:26.568508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4126544 ] 00:06:53.068 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.068 [2024-04-27 02:26:26.630921] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.330 [2024-04-27 02:26:26.699823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val= 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val= 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val=0x1 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val= 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val= 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val= 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val=software 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@22 -- # accel_module=software 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val=32 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val=32 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r 
var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val=1 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val=No 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val= 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:53.330 02:26:26 -- accel/accel.sh@20 -- # val= 00:06:53.330 02:26:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # IFS=: 00:06:53.330 02:26:26 -- accel/accel.sh@19 -- # read -r var val 00:06:54.275 02:26:27 -- accel/accel.sh@20 -- # val= 00:06:54.275 02:26:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.275 02:26:27 -- accel/accel.sh@19 -- # IFS=: 00:06:54.275 02:26:27 -- accel/accel.sh@19 -- # read -r var val 00:06:54.275 02:26:27 -- accel/accel.sh@20 -- # val= 00:06:54.275 02:26:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.275 02:26:27 -- accel/accel.sh@19 -- # IFS=: 00:06:54.275 02:26:27 -- accel/accel.sh@19 -- # read -r var val 00:06:54.275 02:26:27 -- accel/accel.sh@20 -- # val= 00:06:54.275 02:26:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.275 02:26:27 -- accel/accel.sh@19 -- # IFS=: 00:06:54.275 02:26:27 -- accel/accel.sh@19 -- # read -r var val 00:06:54.275 02:26:27 -- accel/accel.sh@20 -- # val= 00:06:54.275 02:26:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.275 02:26:27 -- accel/accel.sh@19 -- # IFS=: 00:06:54.275 02:26:27 -- accel/accel.sh@19 -- # read -r var val 00:06:54.275 02:26:27 -- accel/accel.sh@20 -- # val= 00:06:54.275 02:26:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.275 02:26:27 -- accel/accel.sh@19 -- # IFS=: 00:06:54.275 02:26:27 -- accel/accel.sh@19 -- # read -r var val 00:06:54.275 02:26:27 -- accel/accel.sh@20 -- # val= 00:06:54.275 02:26:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.275 02:26:27 -- accel/accel.sh@19 -- # IFS=: 00:06:54.275 02:26:27 -- accel/accel.sh@19 -- # read -r var val 00:06:54.275 02:26:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.275 02:26:27 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:54.275 02:26:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.275 00:06:54.275 real 0m1.285s 00:06:54.275 user 0m1.185s 00:06:54.275 sys 0m0.102s 00:06:54.275 02:26:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:54.275 02:26:27 -- common/autotest_common.sh@10 -- # set +x 00:06:54.275 ************************************ 00:06:54.275 END TEST accel_dif_generate_copy 00:06:54.275 ************************************ 00:06:54.275 02:26:27 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:54.275 02:26:27 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.275 02:26:27 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:54.275 02:26:27 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.275 02:26:27 -- common/autotest_common.sh@10 -- # set +x 00:06:54.536 ************************************ 00:06:54.536 START TEST accel_comp 00:06:54.536 ************************************ 00:06:54.536 02:26:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.536 02:26:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.536 02:26:27 -- accel/accel.sh@17 -- # local accel_module 00:06:54.536 02:26:27 -- accel/accel.sh@19 -- # IFS=: 00:06:54.536 02:26:27 -- accel/accel.sh@19 -- # read -r var val 00:06:54.536 02:26:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.536 02:26:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.536 02:26:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.536 02:26:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.536 02:26:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.536 02:26:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.536 02:26:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.536 02:26:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.536 02:26:27 -- accel/accel.sh@40 -- # local IFS=, 00:06:54.536 02:26:27 -- accel/accel.sh@41 -- # jq -r . 00:06:54.536 [2024-04-27 02:26:28.005100] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:06:54.536 [2024-04-27 02:26:28.005200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4126848 ] 00:06:54.536 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.536 [2024-04-27 02:26:28.068229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.536 [2024-04-27 02:26:28.135971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val= 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val= 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val= 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val=0x1 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val= 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val= 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 
-- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val=compress 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val= 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val=software 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@22 -- # accel_module=software 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val=32 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val=32 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val=1 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val=No 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val= 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:54.798 02:26:28 -- accel/accel.sh@20 -- # val= 00:06:54.798 02:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # IFS=: 00:06:54.798 02:26:28 -- accel/accel.sh@19 -- # read -r var val 00:06:55.743 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:55.743 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.743 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:55.743 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:55.743 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:55.743 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.743 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:55.743 02:26:29 -- accel/accel.sh@19 -- # read 
-r var val 00:06:55.743 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:55.743 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.743 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:55.743 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:55.743 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:55.743 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.743 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:55.743 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:55.743 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:55.743 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.743 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:55.743 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:55.743 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:55.743 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.743 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:55.744 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:55.744 02:26:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.744 02:26:29 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:55.744 02:26:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.744 00:06:55.744 real 0m1.288s 00:06:55.744 user 0m1.183s 00:06:55.744 sys 0m0.107s 00:06:55.744 02:26:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:55.744 02:26:29 -- common/autotest_common.sh@10 -- # set +x 00:06:55.744 ************************************ 00:06:55.744 END TEST accel_comp 00:06:55.744 ************************************ 00:06:55.744 02:26:29 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.744 02:26:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:55.744 02:26:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.744 02:26:29 -- common/autotest_common.sh@10 -- # set +x 00:06:56.006 ************************************ 00:06:56.006 START TEST accel_decomp 00:06:56.006 ************************************ 00:06:56.006 02:26:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:56.006 02:26:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.006 02:26:29 -- accel/accel.sh@17 -- # local accel_module 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:56.006 02:26:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:56.006 02:26:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.006 02:26:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.006 02:26:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.006 02:26:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.006 02:26:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.006 02:26:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.006 02:26:29 -- accel/accel.sh@40 -- # local IFS=, 00:06:56.006 02:26:29 -- accel/accel.sh@41 -- # jq -r . 00:06:56.006 [2024-04-27 02:26:29.430555] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
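The decompress case above additionally points accel_perf at the pre-built bib input via -l and passes -y, which I take to be the result-verification switch; a sketch under the same assumptions as the earlier one:

  # hypothetical standalone decompress-and-verify run, flags as logged above (wrapper-supplied -c config omitted)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y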
00:06:56.006 [2024-04-27 02:26:29.430618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127077 ] 00:06:56.006 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.006 [2024-04-27 02:26:29.492708] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.006 [2024-04-27 02:26:29.558941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val=0x1 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val=decompress 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val=software 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@22 -- # accel_module=software 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val=32 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 
-- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val=32 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val=1 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val=Yes 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:56.006 02:26:29 -- accel/accel.sh@20 -- # val= 00:06:56.006 02:26:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # IFS=: 00:06:56.006 02:26:29 -- accel/accel.sh@19 -- # read -r var val 00:06:57.394 02:26:30 -- accel/accel.sh@20 -- # val= 00:06:57.394 02:26:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.394 02:26:30 -- accel/accel.sh@19 -- # IFS=: 00:06:57.394 02:26:30 -- accel/accel.sh@19 -- # read -r var val 00:06:57.394 02:26:30 -- accel/accel.sh@20 -- # val= 00:06:57.394 02:26:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.394 02:26:30 -- accel/accel.sh@19 -- # IFS=: 00:06:57.394 02:26:30 -- accel/accel.sh@19 -- # read -r var val 00:06:57.394 02:26:30 -- accel/accel.sh@20 -- # val= 00:06:57.394 02:26:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.394 02:26:30 -- accel/accel.sh@19 -- # IFS=: 00:06:57.394 02:26:30 -- accel/accel.sh@19 -- # read -r var val 00:06:57.394 02:26:30 -- accel/accel.sh@20 -- # val= 00:06:57.394 02:26:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.394 02:26:30 -- accel/accel.sh@19 -- # IFS=: 00:06:57.394 02:26:30 -- accel/accel.sh@19 -- # read -r var val 00:06:57.394 02:26:30 -- accel/accel.sh@20 -- # val= 00:06:57.394 02:26:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.394 02:26:30 -- accel/accel.sh@19 -- # IFS=: 00:06:57.394 02:26:30 -- accel/accel.sh@19 -- # read -r var val 00:06:57.394 02:26:30 -- accel/accel.sh@20 -- # val= 00:06:57.394 02:26:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.394 02:26:30 -- accel/accel.sh@19 -- # IFS=: 00:06:57.394 02:26:30 -- accel/accel.sh@19 -- # read -r var val 00:06:57.394 02:26:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.394 02:26:30 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:57.394 02:26:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.394 00:06:57.394 real 0m1.283s 00:06:57.394 user 0m1.184s 00:06:57.394 sys 0m0.100s 00:06:57.394 02:26:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:57.394 02:26:30 -- common/autotest_common.sh@10 -- # set +x 00:06:57.394 ************************************ 00:06:57.394 END TEST accel_decomp 00:06:57.395 ************************************ 00:06:57.395 02:26:30 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:57.395 02:26:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:57.395 02:26:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.395 02:26:30 -- common/autotest_common.sh@10 -- # set +x 00:06:57.395 ************************************ 00:06:57.395 START TEST accel_decmop_full 00:06:57.395 ************************************ 00:06:57.395 02:26:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:57.395 02:26:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.395 02:26:30 -- accel/accel.sh@17 -- # local accel_module 00:06:57.395 02:26:30 -- accel/accel.sh@19 -- # IFS=: 00:06:57.395 02:26:30 -- accel/accel.sh@19 -- # read -r var val 00:06:57.395 02:26:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:57.395 02:26:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:57.395 02:26:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.395 02:26:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.395 02:26:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.395 02:26:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.395 02:26:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.395 02:26:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.395 02:26:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:57.395 02:26:30 -- accel/accel.sh@41 -- # jq -r . 00:06:57.395 [2024-04-27 02:26:30.848880] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
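The _full variant adds -o 0; judging from the values logged below ('111250 bytes' rather than the usual '4096 bytes'), this makes the run operate on the whole bib file instead of 4 KiB blocks. Sketch, same assumptions as above:

  # hypothetical full-buffer decompress run (accel_decmop_full), per the flags logged above
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0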
00:06:57.395 [2024-04-27 02:26:30.848972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127315 ] 00:06:57.395 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.395 [2024-04-27 02:26:30.911908] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.395 [2024-04-27 02:26:30.977783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.395 02:26:31 -- accel/accel.sh@20 -- # val= 00:06:57.395 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.395 02:26:31 -- accel/accel.sh@20 -- # val= 00:06:57.395 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.395 02:26:31 -- accel/accel.sh@20 -- # val= 00:06:57.395 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.395 02:26:31 -- accel/accel.sh@20 -- # val=0x1 00:06:57.395 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.395 02:26:31 -- accel/accel.sh@20 -- # val= 00:06:57.395 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.395 02:26:31 -- accel/accel.sh@20 -- # val= 00:06:57.395 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.395 02:26:31 -- accel/accel.sh@20 -- # val=decompress 00:06:57.395 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.395 02:26:31 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.395 02:26:31 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:57.395 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.395 02:26:31 -- accel/accel.sh@20 -- # val= 00:06:57.395 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.395 02:26:31 -- accel/accel.sh@20 -- # val=software 00:06:57.395 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.395 02:26:31 -- accel/accel.sh@22 -- # accel_module=software 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.395 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.657 02:26:31 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.657 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.657 02:26:31 -- accel/accel.sh@20 -- # val=32 00:06:57.657 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.657 
02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.657 02:26:31 -- accel/accel.sh@20 -- # val=32 00:06:57.657 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.657 02:26:31 -- accel/accel.sh@20 -- # val=1 00:06:57.657 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.657 02:26:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.657 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.657 02:26:31 -- accel/accel.sh@20 -- # val=Yes 00:06:57.657 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.657 02:26:31 -- accel/accel.sh@20 -- # val= 00:06:57.657 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:57.657 02:26:31 -- accel/accel.sh@20 -- # val= 00:06:57.657 02:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # IFS=: 00:06:57.657 02:26:31 -- accel/accel.sh@19 -- # read -r var val 00:06:58.601 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.601 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.601 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.601 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.601 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.601 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.601 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.601 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.601 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.602 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.602 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.602 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.602 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.602 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.602 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.602 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.602 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.602 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.602 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.602 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.602 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.602 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.602 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.602 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.602 02:26:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.602 02:26:32 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:58.602 02:26:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.602 00:06:58.602 real 0m1.291s 00:06:58.602 user 0m1.197s 00:06:58.602 sys 0m0.095s 00:06:58.602 02:26:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.602 02:26:32 -- common/autotest_common.sh@10 -- # set +x 00:06:58.602 ************************************ 00:06:58.602 END TEST accel_decmop_full 00:06:58.602 ************************************ 00:06:58.602 02:26:32 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:58.602 02:26:32 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:58.602 02:26:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.602 02:26:32 -- common/autotest_common.sh@10 -- # set +x 00:06:58.863 ************************************ 00:06:58.863 START TEST accel_decomp_mcore 00:06:58.863 ************************************ 00:06:58.863 02:26:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:58.863 02:26:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.863 02:26:32 -- accel/accel.sh@17 -- # local accel_module 00:06:58.863 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.863 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.863 02:26:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:58.863 02:26:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:58.863 02:26:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.863 02:26:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.863 02:26:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.863 02:26:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.863 02:26:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.863 02:26:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.863 02:26:32 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.863 02:26:32 -- accel/accel.sh@41 -- # jq -r . 00:06:58.863 [2024-04-27 02:26:32.288111] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
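The mcore variant adds -m 0xf, the standard SPDK core-mask option; consistent with that, the records below report four available cores and a reactor starting on each. Sketch, same assumptions as above:

  # hypothetical multi-core decompress run (accel_decomp_mcore) on cores 0-3, per the logged flags
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf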
00:06:58.863 [2024-04-27 02:26:32.288178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127659 ] 00:06:58.863 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.863 [2024-04-27 02:26:32.349832] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.863 [2024-04-27 02:26:32.417519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.863 [2024-04-27 02:26:32.417717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.863 [2024-04-27 02:26:32.417871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.863 [2024-04-27 02:26:32.417876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val=0xf 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val=decompress 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val=software 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val=32 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val=32 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val=1 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val=Yes 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:06:58.864 02:26:32 -- accel/accel.sh@20 -- # val= 00:06:58.864 02:26:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # IFS=: 00:06:58.864 02:26:32 -- accel/accel.sh@19 -- # read -r var val 00:07:00.251 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.251 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.251 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.251 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.251 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.251 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.251 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.251 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.251 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.251 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.251 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.251 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.251 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.251 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.251 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.251 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.251 
02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.251 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.251 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.251 02:26:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.251 02:26:33 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:00.251 02:26:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.251 00:07:00.251 real 0m1.295s 00:07:00.251 user 0m4.434s 00:07:00.251 sys 0m0.107s 00:07:00.251 02:26:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:00.251 02:26:33 -- common/autotest_common.sh@10 -- # set +x 00:07:00.251 ************************************ 00:07:00.251 END TEST accel_decomp_mcore 00:07:00.251 ************************************ 00:07:00.251 02:26:33 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.251 02:26:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:00.251 02:26:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.251 02:26:33 -- common/autotest_common.sh@10 -- # set +x 00:07:00.251 ************************************ 00:07:00.251 START TEST accel_decomp_full_mcore 00:07:00.251 ************************************ 00:07:00.251 02:26:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.251 02:26:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.251 02:26:33 -- accel/accel.sh@17 -- # local accel_module 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.251 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.251 02:26:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.251 02:26:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.251 02:26:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.251 02:26:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.251 02:26:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.251 02:26:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.251 02:26:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.251 02:26:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.251 02:26:33 -- accel/accel.sh@40 -- # local IFS=, 00:07:00.251 02:26:33 -- accel/accel.sh@41 -- # jq -r . 00:07:00.251 [2024-04-27 02:26:33.760922] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:07:00.251 [2024-04-27 02:26:33.761013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4128023 ] 00:07:00.251 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.251 [2024-04-27 02:26:33.823142] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.512 [2024-04-27 02:26:33.888274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.512 [2024-04-27 02:26:33.888423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.512 [2024-04-27 02:26:33.888606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.512 [2024-04-27 02:26:33.888610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val=0xf 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val=decompress 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val=software 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@22 -- # accel_module=software 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val=32 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val=32 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val=1 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val=Yes 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:00.512 02:26:33 -- accel/accel.sh@20 -- # val= 00:07:00.512 02:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # IFS=: 00:07:00.512 02:26:33 -- accel/accel.sh@19 -- # read -r var val 00:07:01.456 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.456 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.456 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.456 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.456 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.456 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.456 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.456 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.456 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.456 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.456 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.456 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.456 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.456 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.456 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.456 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.456 
02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.456 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.456 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.456 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.456 02:26:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.456 02:26:35 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.456 02:26:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.457 00:07:01.457 real 0m1.309s 00:07:01.457 user 0m4.488s 00:07:01.457 sys 0m0.109s 00:07:01.457 02:26:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:01.457 02:26:35 -- common/autotest_common.sh@10 -- # set +x 00:07:01.457 ************************************ 00:07:01.457 END TEST accel_decomp_full_mcore 00:07:01.457 ************************************ 00:07:01.718 02:26:35 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:01.718 02:26:35 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:01.718 02:26:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.718 02:26:35 -- common/autotest_common.sh@10 -- # set +x 00:07:01.718 ************************************ 00:07:01.718 START TEST accel_decomp_mthread 00:07:01.718 ************************************ 00:07:01.718 02:26:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:01.718 02:26:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.718 02:26:35 -- accel/accel.sh@17 -- # local accel_module 00:07:01.718 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.718 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.718 02:26:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:01.718 02:26:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:01.718 02:26:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.718 02:26:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.718 02:26:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.718 02:26:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.718 02:26:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.718 02:26:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.718 02:26:35 -- accel/accel.sh@40 -- # local IFS=, 00:07:01.718 02:26:35 -- accel/accel.sh@41 -- # jq -r . 00:07:01.718 [2024-04-27 02:26:35.246755] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
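The mthread variant instead adds -T 2 to the same decompress command; the log only records the flag, but I read it as asking accel_perf for two worker threads on the single core. Sketch, same assumptions as above:

  # hypothetical two-thread decompress run (accel_decomp_mthread), per the logged flags
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2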
00:07:01.718 [2024-04-27 02:26:35.246840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4128386 ] 00:07:01.718 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.718 [2024-04-27 02:26:35.317485] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.979 [2024-04-27 02:26:35.382941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val=0x1 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val=decompress 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val=software 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@22 -- # accel_module=software 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val=32 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 
-- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val=32 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val=2 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val=Yes 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:01.979 02:26:35 -- accel/accel.sh@20 -- # val= 00:07:01.979 02:26:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # IFS=: 00:07:01.979 02:26:35 -- accel/accel.sh@19 -- # read -r var val 00:07:02.922 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:02.922 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:02.922 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:02.922 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:02.922 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:02.922 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:02.922 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:02.922 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:02.922 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:02.922 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:02.922 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:02.922 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:02.922 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:02.922 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:02.922 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:02.922 02:26:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.922 02:26:36 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.922 02:26:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.922 00:07:02.922 real 0m1.300s 00:07:02.922 user 0m1.209s 00:07:02.922 sys 0m0.103s 00:07:02.922 02:26:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:02.922 02:26:36 -- common/autotest_common.sh@10 -- # set +x 
00:07:02.922 ************************************ 00:07:02.922 END TEST accel_decomp_mthread 00:07:02.922 ************************************ 00:07:03.183 02:26:36 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.183 02:26:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:03.183 02:26:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.183 02:26:36 -- common/autotest_common.sh@10 -- # set +x 00:07:03.183 ************************************ 00:07:03.183 START TEST accel_deomp_full_mthread 00:07:03.183 ************************************ 00:07:03.183 02:26:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.183 02:26:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.183 02:26:36 -- accel/accel.sh@17 -- # local accel_module 00:07:03.183 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.183 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.183 02:26:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.183 02:26:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.183 02:26:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.183 02:26:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.183 02:26:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.183 02:26:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.183 02:26:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.183 02:26:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.183 02:26:36 -- accel/accel.sh@40 -- # local IFS=, 00:07:03.183 02:26:36 -- accel/accel.sh@41 -- # jq -r . 00:07:03.183 [2024-04-27 02:26:36.738044] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:07:03.183 [2024-04-27 02:26:36.738142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4128749 ] 00:07:03.183 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.183 [2024-04-27 02:26:36.802875] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.444 [2024-04-27 02:26:36.873772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val=0x1 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val=decompress 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val=software 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@22 -- # accel_module=software 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val=32 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 
02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val=32 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val=2 00:07:03.444 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 02:26:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.445 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.445 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.445 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.445 02:26:36 -- accel/accel.sh@20 -- # val=Yes 00:07:03.445 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.445 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.445 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.445 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:03.445 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.445 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.445 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:03.445 02:26:36 -- accel/accel.sh@20 -- # val= 00:07:03.445 02:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.445 02:26:36 -- accel/accel.sh@19 -- # IFS=: 00:07:03.445 02:26:36 -- accel/accel.sh@19 -- # read -r var val 00:07:04.834 02:26:38 -- accel/accel.sh@20 -- # val= 00:07:04.834 02:26:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # IFS=: 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # read -r var val 00:07:04.834 02:26:38 -- accel/accel.sh@20 -- # val= 00:07:04.834 02:26:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # IFS=: 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # read -r var val 00:07:04.834 02:26:38 -- accel/accel.sh@20 -- # val= 00:07:04.834 02:26:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # IFS=: 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # read -r var val 00:07:04.834 02:26:38 -- accel/accel.sh@20 -- # val= 00:07:04.834 02:26:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # IFS=: 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # read -r var val 00:07:04.834 02:26:38 -- accel/accel.sh@20 -- # val= 00:07:04.834 02:26:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # IFS=: 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # read -r var val 00:07:04.834 02:26:38 -- accel/accel.sh@20 -- # val= 00:07:04.834 02:26:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # IFS=: 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # read -r var val 00:07:04.834 02:26:38 -- accel/accel.sh@20 -- # val= 00:07:04.834 02:26:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # IFS=: 00:07:04.834 02:26:38 -- accel/accel.sh@19 -- # read -r var val 00:07:04.834 02:26:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.834 02:26:38 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:04.834 02:26:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.834 00:07:04.834 real 0m1.331s 00:07:04.834 user 0m1.229s 00:07:04.834 sys 0m0.114s 00:07:04.834 02:26:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:04.834 02:26:38 -- common/autotest_common.sh@10 -- # 
set +x 00:07:04.834 ************************************ 00:07:04.834 END TEST accel_deomp_full_mthread 00:07:04.834 ************************************ 00:07:04.834 02:26:38 -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:04.834 02:26:38 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:04.834 02:26:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:04.834 02:26:38 -- accel/accel.sh@137 -- # build_accel_config 00:07:04.834 02:26:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.834 02:26:38 -- common/autotest_common.sh@10 -- # set +x 00:07:04.834 02:26:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.834 02:26:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.834 02:26:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.834 02:26:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.834 02:26:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.834 02:26:38 -- accel/accel.sh@40 -- # local IFS=, 00:07:04.834 02:26:38 -- accel/accel.sh@41 -- # jq -r . 00:07:04.834 ************************************ 00:07:04.834 START TEST accel_dif_functional_tests 00:07:04.834 ************************************ 00:07:04.834 02:26:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:04.834 [2024-04-27 02:26:38.268934] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:07:04.834 [2024-04-27 02:26:38.268987] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4129079 ] 00:07:04.834 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.834 [2024-04-27 02:26:38.332152] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.834 [2024-04-27 02:26:38.403576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.834 [2024-04-27 02:26:38.403714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.834 [2024-04-27 02:26:38.403718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.097 00:07:05.097 00:07:05.097 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.097 http://cunit.sourceforge.net/ 00:07:05.097 00:07:05.097 00:07:05.097 Suite: accel_dif 00:07:05.097 Test: verify: DIF generated, GUARD check ...passed 00:07:05.097 Test: verify: DIF generated, APPTAG check ...passed 00:07:05.097 Test: verify: DIF generated, REFTAG check ...passed 00:07:05.097 Test: verify: DIF not generated, GUARD check ...[2024-04-27 02:26:38.459546] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:05.097 [2024-04-27 02:26:38.459583] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:05.097 passed 00:07:05.097 Test: verify: DIF not generated, APPTAG check ...[2024-04-27 02:26:38.459615] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:05.097 [2024-04-27 02:26:38.459630] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:05.097 passed 00:07:05.097 Test: verify: DIF not generated, REFTAG check ...[2024-04-27 02:26:38.459645] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:05.097 [2024-04-27 
02:26:38.459660] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:05.097 passed 00:07:05.097 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:05.097 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-27 02:26:38.459702] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:05.097 passed 00:07:05.097 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:05.097 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:05.097 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:05.097 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-27 02:26:38.459817] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:05.097 passed 00:07:05.097 Test: generate copy: DIF generated, GUARD check ...passed 00:07:05.097 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:05.097 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:05.097 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:05.097 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:05.097 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:05.097 Test: generate copy: iovecs-len validate ...[2024-04-27 02:26:38.460004] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:05.097 passed 00:07:05.097 Test: generate copy: buffer alignment validate ...passed 00:07:05.097 00:07:05.097 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.097 suites 1 1 n/a 0 0 00:07:05.097 tests 20 20 20 0 0 00:07:05.097 asserts 204 204 204 0 n/a 00:07:05.097 00:07:05.097 Elapsed time = 0.000 seconds 00:07:05.097 00:07:05.097 real 0m0.354s 00:07:05.097 user 0m0.446s 00:07:05.097 sys 0m0.128s 00:07:05.097 02:26:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.097 02:26:38 -- common/autotest_common.sh@10 -- # set +x 00:07:05.097 ************************************ 00:07:05.097 END TEST accel_dif_functional_tests 00:07:05.097 ************************************ 00:07:05.097 00:07:05.097 real 0m32.243s 00:07:05.097 user 0m34.386s 00:07:05.097 sys 0m5.048s 00:07:05.097 02:26:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.097 02:26:38 -- common/autotest_common.sh@10 -- # set +x 00:07:05.097 ************************************ 00:07:05.097 END TEST accel 00:07:05.097 ************************************ 00:07:05.097 02:26:38 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:05.097 02:26:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.097 02:26:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.097 02:26:38 -- common/autotest_common.sh@10 -- # set +x 00:07:05.359 ************************************ 00:07:05.359 START TEST accel_rpc 00:07:05.359 ************************************ 00:07:05.359 02:26:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:05.359 * Looking for test storage... 
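For reference, the accel decompress tests in this stretch all drive the accel_perf example binary. A minimal standalone sketch of the accel_deomp_full_mthread invocation traced above, using the same flags captured in this run; the harness additionally passes an accel JSON config over -c /dev/fd/62, which is omitted here (without it only the software module is available, which is what this run exercised anyway):

# Sketch only -- mirrors the flags recorded in the trace above:
# -t 1 (1-second run), -w decompress, -l <bib input file>, -y (verify),
# plus -o 0 and -T 2 exactly as captured for the full_mthread variant.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/accel_perf -t 1 -w decompress \
        -l $SPDK/test/accel/bib -y -o 0 -T 2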
00:07:05.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:05.359 02:26:38 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:05.359 02:26:38 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4129184 00:07:05.359 02:26:38 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:05.359 02:26:38 -- accel/accel_rpc.sh@15 -- # waitforlisten 4129184 00:07:05.359 02:26:38 -- common/autotest_common.sh@817 -- # '[' -z 4129184 ']' 00:07:05.359 02:26:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.359 02:26:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:05.359 02:26:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.359 02:26:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:05.359 02:26:38 -- common/autotest_common.sh@10 -- # set +x 00:07:05.359 [2024-04-27 02:26:38.913565] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:07:05.359 [2024-04-27 02:26:38.913600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4129184 ] 00:07:05.359 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.620 [2024-04-27 02:26:38.986474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.620 [2024-04-27 02:26:39.048669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.620 02:26:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:05.620 02:26:39 -- common/autotest_common.sh@850 -- # return 0 00:07:05.620 02:26:39 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:05.620 02:26:39 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:05.620 02:26:39 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:05.620 02:26:39 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:05.620 02:26:39 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:05.620 02:26:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.620 02:26:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.620 02:26:39 -- common/autotest_common.sh@10 -- # set +x 00:07:05.620 ************************************ 00:07:05.620 START TEST accel_assign_opcode 00:07:05.620 ************************************ 00:07:05.620 02:26:39 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:07:05.620 02:26:39 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:05.620 02:26:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.620 02:26:39 -- common/autotest_common.sh@10 -- # set +x 00:07:05.620 [2024-04-27 02:26:39.221378] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:05.620 02:26:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.620 02:26:39 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:05.620 02:26:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.620 02:26:39 -- common/autotest_common.sh@10 -- # set +x 00:07:05.620 [2024-04-27 02:26:39.229391] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:07:05.621 02:26:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.621 02:26:39 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:05.621 02:26:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.621 02:26:39 -- common/autotest_common.sh@10 -- # set +x 00:07:05.883 02:26:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.883 02:26:39 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:05.883 02:26:39 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:05.883 02:26:39 -- accel/accel_rpc.sh@42 -- # grep software 00:07:05.883 02:26:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.883 02:26:39 -- common/autotest_common.sh@10 -- # set +x 00:07:05.883 02:26:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.883 software 00:07:05.883 00:07:05.883 real 0m0.220s 00:07:05.883 user 0m0.048s 00:07:05.883 sys 0m0.009s 00:07:05.883 02:26:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.883 02:26:39 -- common/autotest_common.sh@10 -- # set +x 00:07:05.883 ************************************ 00:07:05.883 END TEST accel_assign_opcode 00:07:05.883 ************************************ 00:07:05.883 02:26:39 -- accel/accel_rpc.sh@55 -- # killprocess 4129184 00:07:05.883 02:26:39 -- common/autotest_common.sh@936 -- # '[' -z 4129184 ']' 00:07:05.883 02:26:39 -- common/autotest_common.sh@940 -- # kill -0 4129184 00:07:05.883 02:26:39 -- common/autotest_common.sh@941 -- # uname 00:07:05.883 02:26:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:05.883 02:26:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4129184 00:07:06.144 02:26:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:06.144 02:26:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:06.144 02:26:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4129184' 00:07:06.144 killing process with pid 4129184 00:07:06.144 02:26:39 -- common/autotest_common.sh@955 -- # kill 4129184 00:07:06.144 02:26:39 -- common/autotest_common.sh@960 -- # wait 4129184 00:07:06.144 00:07:06.144 real 0m0.942s 00:07:06.144 user 0m0.974s 00:07:06.144 sys 0m0.399s 00:07:06.144 02:26:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:06.144 02:26:39 -- common/autotest_common.sh@10 -- # set +x 00:07:06.144 ************************************ 00:07:06.144 END TEST accel_rpc 00:07:06.144 ************************************ 00:07:06.405 02:26:39 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:06.405 02:26:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:06.405 02:26:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.405 02:26:39 -- common/autotest_common.sh@10 -- # set +x 00:07:06.405 ************************************ 00:07:06.405 START TEST app_cmdline 00:07:06.405 ************************************ 00:07:06.405 02:26:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:06.405 * Looking for test storage... 
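The accel_rpc suite above boils down to three RPCs issued against a spdk_tgt started with --wait-for-rpc. A hedged sketch of the same sequence driven by hand with scripts/rpc.py (default socket /var/tmp/spdk.sock, the same one waitforlisten polls above):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py accel_assign_opc -o copy -m software    # pin the copy opcode to the software module (must precede init)
$SPDK/scripts/rpc.py framework_start_init                    # let the target finish subsystem initialization
$SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy # prints "software", as the test checks above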
00:07:06.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:06.405 02:26:39 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:06.405 02:26:39 -- app/cmdline.sh@17 -- # spdk_tgt_pid=4129597 00:07:06.405 02:26:39 -- app/cmdline.sh@18 -- # waitforlisten 4129597 00:07:06.406 02:26:39 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:06.406 02:26:39 -- common/autotest_common.sh@817 -- # '[' -z 4129597 ']' 00:07:06.406 02:26:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.406 02:26:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:06.406 02:26:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.406 02:26:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:06.406 02:26:39 -- common/autotest_common.sh@10 -- # set +x 00:07:06.666 [2024-04-27 02:26:40.045330] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:07:06.666 [2024-04-27 02:26:40.045383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4129597 ] 00:07:06.666 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.666 [2024-04-27 02:26:40.105119] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.666 [2024-04-27 02:26:40.167541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.238 02:26:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:07.238 02:26:40 -- common/autotest_common.sh@850 -- # return 0 00:07:07.238 02:26:40 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:07.500 { 00:07:07.500 "version": "SPDK v24.05-pre git sha1 6651b13f7", 00:07:07.500 "fields": { 00:07:07.500 "major": 24, 00:07:07.500 "minor": 5, 00:07:07.500 "patch": 0, 00:07:07.500 "suffix": "-pre", 00:07:07.500 "commit": "6651b13f7" 00:07:07.500 } 00:07:07.500 } 00:07:07.500 02:26:40 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:07.500 02:26:40 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:07.500 02:26:40 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:07.500 02:26:40 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:07.500 02:26:40 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:07.500 02:26:40 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:07.500 02:26:40 -- app/cmdline.sh@26 -- # sort 00:07:07.500 02:26:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:07.500 02:26:40 -- common/autotest_common.sh@10 -- # set +x 00:07:07.500 02:26:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:07.500 02:26:40 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:07.500 02:26:40 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:07.500 02:26:40 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.500 02:26:40 -- common/autotest_common.sh@638 -- # local es=0 00:07:07.500 02:26:40 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.500 02:26:40 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.500 02:26:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:07.500 02:26:40 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.500 02:26:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:07.500 02:26:40 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.500 02:26:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:07.500 02:26:40 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.500 02:26:40 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:07.500 02:26:40 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.761 request: 00:07:07.761 { 00:07:07.761 "method": "env_dpdk_get_mem_stats", 00:07:07.761 "req_id": 1 00:07:07.761 } 00:07:07.761 Got JSON-RPC error response 00:07:07.761 response: 00:07:07.761 { 00:07:07.761 "code": -32601, 00:07:07.761 "message": "Method not found" 00:07:07.761 } 00:07:07.761 02:26:41 -- common/autotest_common.sh@641 -- # es=1 00:07:07.761 02:26:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:07.761 02:26:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:07.761 02:26:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:07.761 02:26:41 -- app/cmdline.sh@1 -- # killprocess 4129597 00:07:07.761 02:26:41 -- common/autotest_common.sh@936 -- # '[' -z 4129597 ']' 00:07:07.761 02:26:41 -- common/autotest_common.sh@940 -- # kill -0 4129597 00:07:07.761 02:26:41 -- common/autotest_common.sh@941 -- # uname 00:07:07.761 02:26:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:07.761 02:26:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4129597 00:07:07.761 02:26:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:07.761 02:26:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:07.761 02:26:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4129597' 00:07:07.761 killing process with pid 4129597 00:07:07.761 02:26:41 -- common/autotest_common.sh@955 -- # kill 4129597 00:07:07.761 02:26:41 -- common/autotest_common.sh@960 -- # wait 4129597 00:07:08.022 00:07:08.022 real 0m1.479s 00:07:08.022 user 0m1.780s 00:07:08.022 sys 0m0.363s 00:07:08.022 02:26:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.022 02:26:41 -- common/autotest_common.sh@10 -- # set +x 00:07:08.022 ************************************ 00:07:08.022 END TEST app_cmdline 00:07:08.022 ************************************ 00:07:08.022 02:26:41 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:08.022 02:26:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.022 02:26:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.022 02:26:41 -- common/autotest_common.sh@10 -- # set +x 00:07:08.022 ************************************ 00:07:08.022 START TEST version 00:07:08.022 
************************************ 00:07:08.022 02:26:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:08.284 * Looking for test storage... 00:07:08.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:08.284 02:26:41 -- app/version.sh@17 -- # get_header_version major 00:07:08.284 02:26:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.284 02:26:41 -- app/version.sh@14 -- # cut -f2 00:07:08.284 02:26:41 -- app/version.sh@14 -- # tr -d '"' 00:07:08.284 02:26:41 -- app/version.sh@17 -- # major=24 00:07:08.284 02:26:41 -- app/version.sh@18 -- # get_header_version minor 00:07:08.284 02:26:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.284 02:26:41 -- app/version.sh@14 -- # cut -f2 00:07:08.284 02:26:41 -- app/version.sh@14 -- # tr -d '"' 00:07:08.284 02:26:41 -- app/version.sh@18 -- # minor=5 00:07:08.284 02:26:41 -- app/version.sh@19 -- # get_header_version patch 00:07:08.284 02:26:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.284 02:26:41 -- app/version.sh@14 -- # cut -f2 00:07:08.284 02:26:41 -- app/version.sh@14 -- # tr -d '"' 00:07:08.284 02:26:41 -- app/version.sh@19 -- # patch=0 00:07:08.284 02:26:41 -- app/version.sh@20 -- # get_header_version suffix 00:07:08.284 02:26:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.284 02:26:41 -- app/version.sh@14 -- # cut -f2 00:07:08.284 02:26:41 -- app/version.sh@14 -- # tr -d '"' 00:07:08.284 02:26:41 -- app/version.sh@20 -- # suffix=-pre 00:07:08.284 02:26:41 -- app/version.sh@22 -- # version=24.5 00:07:08.284 02:26:41 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:08.284 02:26:41 -- app/version.sh@28 -- # version=24.5rc0 00:07:08.285 02:26:41 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:08.285 02:26:41 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:08.285 02:26:41 -- app/version.sh@30 -- # py_version=24.5rc0 00:07:08.285 02:26:41 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:08.285 00:07:08.285 real 0m0.176s 00:07:08.285 user 0m0.095s 00:07:08.285 sys 0m0.115s 00:07:08.285 02:26:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.285 02:26:41 -- common/autotest_common.sh@10 -- # set +x 00:07:08.285 ************************************ 00:07:08.285 END TEST version 00:07:08.285 ************************************ 00:07:08.285 02:26:41 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:08.285 02:26:41 -- spdk/autotest.sh@194 -- # uname -s 00:07:08.285 02:26:41 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:08.285 02:26:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:08.285 02:26:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:08.285 02:26:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:08.285 02:26:41 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:07:08.285 02:26:41 -- spdk/autotest.sh@258 -- # timing_exit lib 00:07:08.285 02:26:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:08.285 02:26:41 -- common/autotest_common.sh@10 -- # set +x 00:07:08.285 02:26:41 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:08.285 02:26:41 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:07:08.285 02:26:41 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:07:08.285 02:26:41 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:07:08.285 02:26:41 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:07:08.285 02:26:41 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:07:08.285 02:26:41 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.285 02:26:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:08.285 02:26:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.285 02:26:41 -- common/autotest_common.sh@10 -- # set +x 00:07:08.546 ************************************ 00:07:08.546 START TEST nvmf_tcp 00:07:08.546 ************************************ 00:07:08.546 02:26:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.546 * Looking for test storage... 00:07:08.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:08.546 02:26:42 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:08.546 02:26:42 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:08.546 02:26:42 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.546 02:26:42 -- nvmf/common.sh@7 -- # uname -s 00:07:08.546 02:26:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.546 02:26:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.546 02:26:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.546 02:26:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.546 02:26:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.546 02:26:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.546 02:26:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.546 02:26:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.546 02:26:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.546 02:26:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.546 02:26:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:08.546 02:26:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:08.546 02:26:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.546 02:26:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.546 02:26:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.546 02:26:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.546 02:26:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.546 02:26:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.546 02:26:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.546 02:26:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.546 02:26:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.546 02:26:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.546 02:26:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.546 02:26:42 -- paths/export.sh@5 -- # export PATH 00:07:08.546 02:26:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.546 02:26:42 -- nvmf/common.sh@47 -- # : 0 00:07:08.546 02:26:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.546 02:26:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.546 02:26:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.546 02:26:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.546 02:26:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.546 02:26:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.546 02:26:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.546 02:26:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.546 02:26:42 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:08.546 02:26:42 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:08.546 02:26:42 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:08.546 02:26:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:08.546 02:26:42 -- common/autotest_common.sh@10 -- # set +x 00:07:08.546 02:26:42 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:08.546 02:26:42 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:08.546 02:26:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:08.546 02:26:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.546 02:26:42 -- common/autotest_common.sh@10 -- # set +x 00:07:08.807 ************************************ 00:07:08.807 START TEST nvmf_example 00:07:08.807 ************************************ 00:07:08.807 02:26:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:08.807 * Looking for test storage... 
00:07:08.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.807 02:26:42 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.807 02:26:42 -- nvmf/common.sh@7 -- # uname -s 00:07:08.807 02:26:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.807 02:26:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.808 02:26:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.808 02:26:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.808 02:26:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.808 02:26:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.808 02:26:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.808 02:26:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.808 02:26:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.808 02:26:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.808 02:26:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:08.808 02:26:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:08.808 02:26:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.808 02:26:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.808 02:26:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.808 02:26:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.808 02:26:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.808 02:26:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.808 02:26:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.808 02:26:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.808 02:26:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.808 02:26:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.808 02:26:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.808 02:26:42 -- paths/export.sh@5 -- # export PATH 00:07:08.808 02:26:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.808 02:26:42 -- nvmf/common.sh@47 -- # : 0 00:07:08.808 02:26:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.808 02:26:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.808 02:26:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.808 02:26:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.808 02:26:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.808 02:26:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.808 02:26:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.808 02:26:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.808 02:26:42 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:08.808 02:26:42 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:08.808 02:26:42 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:08.808 02:26:42 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:08.808 02:26:42 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:08.808 02:26:42 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:08.808 02:26:42 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:08.808 02:26:42 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:08.808 02:26:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:08.808 02:26:42 -- common/autotest_common.sh@10 -- # set +x 00:07:08.808 02:26:42 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:08.808 02:26:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:08.808 02:26:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.808 02:26:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:08.808 02:26:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:08.808 02:26:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:08.808 02:26:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.808 02:26:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.808 02:26:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.808 02:26:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:08.808 02:26:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:08.808 02:26:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:08.808 02:26:42 -- 
common/autotest_common.sh@10 -- # set +x 00:07:15.391 02:26:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:15.391 02:26:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:15.391 02:26:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:15.391 02:26:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:15.391 02:26:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:15.391 02:26:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:15.391 02:26:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:15.391 02:26:48 -- nvmf/common.sh@295 -- # net_devs=() 00:07:15.391 02:26:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:15.391 02:26:48 -- nvmf/common.sh@296 -- # e810=() 00:07:15.391 02:26:48 -- nvmf/common.sh@296 -- # local -ga e810 00:07:15.391 02:26:48 -- nvmf/common.sh@297 -- # x722=() 00:07:15.391 02:26:48 -- nvmf/common.sh@297 -- # local -ga x722 00:07:15.391 02:26:48 -- nvmf/common.sh@298 -- # mlx=() 00:07:15.391 02:26:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:15.391 02:26:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.391 02:26:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.391 02:26:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.391 02:26:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.391 02:26:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.391 02:26:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.391 02:26:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.391 02:26:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.391 02:26:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.391 02:26:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.391 02:26:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.391 02:26:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:15.391 02:26:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:15.391 02:26:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:15.391 02:26:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.391 02:26:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:15.391 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:15.391 02:26:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.391 02:26:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:15.391 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:15.391 02:26:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:07:15.391 02:26:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:15.391 02:26:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.391 02:26:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.391 02:26:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:15.391 02:26:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.391 02:26:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:15.391 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:15.391 02:26:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.391 02:26:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.391 02:26:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.391 02:26:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:15.391 02:26:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.391 02:26:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:15.391 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:15.391 02:26:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.391 02:26:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:15.391 02:26:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:15.391 02:26:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:15.391 02:26:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.391 02:26:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.391 02:26:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.391 02:26:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:15.391 02:26:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.391 02:26:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.391 02:26:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:15.391 02:26:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.391 02:26:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.391 02:26:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:15.391 02:26:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:15.391 02:26:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.391 02:26:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.391 02:26:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.391 02:26:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.391 02:26:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:15.391 02:26:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.391 02:26:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.391 02:26:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.391 02:26:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:15.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:15.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:07:15.391 00:07:15.391 --- 10.0.0.2 ping statistics --- 00:07:15.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.391 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:07:15.391 02:26:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.397 ms 00:07:15.391 00:07:15.391 --- 10.0.0.1 ping statistics --- 00:07:15.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.391 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:07:15.391 02:26:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.391 02:26:48 -- nvmf/common.sh@411 -- # return 0 00:07:15.391 02:26:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:15.391 02:26:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.391 02:26:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:15.391 02:26:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.391 02:26:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:15.391 02:26:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:15.391 02:26:48 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:15.391 02:26:48 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:15.391 02:26:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:15.391 02:26:48 -- common/autotest_common.sh@10 -- # set +x 00:07:15.391 02:26:48 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:15.391 02:26:48 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:15.391 02:26:48 -- target/nvmf_example.sh@34 -- # nvmfpid=4133717 00:07:15.391 02:26:48 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:15.391 02:26:48 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:15.391 02:26:48 -- target/nvmf_example.sh@36 -- # waitforlisten 4133717 00:07:15.391 02:26:48 -- common/autotest_common.sh@817 -- # '[' -z 4133717 ']' 00:07:15.391 02:26:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.391 02:26:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:15.391 02:26:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
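To summarize the target networking that nvmf_tcp_init traced above puts in place before the example app is launched: the two ice/E810 ports found earlier (0000:4b:00.0/cvl_0_0 and 0000:4b:00.1/cvl_0_1) are split across network namespaces so the same host can act as target and initiator over TCP port 4420. Condensed from the commands visible in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic from the target namespace
ping -c 1 10.0.0.2                                             # connectivity check, as shown above

The example target is then configured over RPC (nvmf_create_transport -t tcp -o -u 8192, bdev_malloc_create 64 512, nvmf_create_subsystem / nvmf_subsystem_add_ns / nvmf_subsystem_add_listener for nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420) and driven with spdk_nvme_perf, as the run that follows below records.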
00:07:15.391 02:26:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:15.391 02:26:48 -- common/autotest_common.sh@10 -- # set +x 00:07:15.652 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.225 02:26:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:16.225 02:26:49 -- common/autotest_common.sh@850 -- # return 0 00:07:16.225 02:26:49 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:16.225 02:26:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:16.225 02:26:49 -- common/autotest_common.sh@10 -- # set +x 00:07:16.486 02:26:49 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:16.486 02:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.486 02:26:49 -- common/autotest_common.sh@10 -- # set +x 00:07:16.486 02:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.486 02:26:49 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:16.486 02:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.486 02:26:49 -- common/autotest_common.sh@10 -- # set +x 00:07:16.486 02:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.486 02:26:49 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:16.486 02:26:49 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:16.486 02:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.486 02:26:49 -- common/autotest_common.sh@10 -- # set +x 00:07:16.486 02:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.486 02:26:49 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:16.486 02:26:49 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:16.486 02:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.486 02:26:49 -- common/autotest_common.sh@10 -- # set +x 00:07:16.486 02:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.486 02:26:49 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.486 02:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.486 02:26:49 -- common/autotest_common.sh@10 -- # set +x 00:07:16.486 02:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.486 02:26:49 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:16.486 02:26:49 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:16.486 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.792 Initializing NVMe Controllers 00:07:28.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:28.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:28.792 Initialization complete. Launching workers. 
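The rpc_cmd calls above are the complete target-side configuration for this test: a TCP transport, one 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with any host allowed, the bdev attached as its namespace, and a listener on 10.0.0.2:4420, followed by a perf run from the initiator side. rpc_cmd is the test-suite wrapper around SPDK's JSON-RPC client, so the same sequence can be reproduced roughly as below; this is a sketch that assumes the example target's default RPC socket at /var/tmp/spdk.sock and an SPDK build tree as the working directory, with all flags kept verbatim from the trace.

# Target-side RPCs, mirroring the rpc_cmd sequence in the trace.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512                   # 64 MiB bdev, 512-byte blocks -> Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: the spdk_nvme_perf invocation from the trace
# (queue depth 64, 4 KiB random mixed read/write I/O for 10 seconds).
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'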
00:07:28.792 ======================================================== 00:07:28.792 Latency(us) 00:07:28.792 Device Information : IOPS MiB/s Average min max 00:07:28.792 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13638.70 53.28 4695.05 867.74 16058.45 00:07:28.792 ======================================================== 00:07:28.792 Total : 13638.70 53.28 4695.05 867.74 16058.45 00:07:28.792 00:07:28.792 02:27:00 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:28.792 02:27:00 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:28.792 02:27:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:28.792 02:27:00 -- nvmf/common.sh@117 -- # sync 00:07:28.792 02:27:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.792 02:27:00 -- nvmf/common.sh@120 -- # set +e 00:07:28.792 02:27:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.792 02:27:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:28.792 rmmod nvme_tcp 00:07:28.792 rmmod nvme_fabrics 00:07:28.792 rmmod nvme_keyring 00:07:28.792 02:27:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.792 02:27:00 -- nvmf/common.sh@124 -- # set -e 00:07:28.792 02:27:00 -- nvmf/common.sh@125 -- # return 0 00:07:28.792 02:27:00 -- nvmf/common.sh@478 -- # '[' -n 4133717 ']' 00:07:28.792 02:27:00 -- nvmf/common.sh@479 -- # killprocess 4133717 00:07:28.792 02:27:00 -- common/autotest_common.sh@936 -- # '[' -z 4133717 ']' 00:07:28.792 02:27:00 -- common/autotest_common.sh@940 -- # kill -0 4133717 00:07:28.792 02:27:00 -- common/autotest_common.sh@941 -- # uname 00:07:28.792 02:27:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:28.792 02:27:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4133717 00:07:28.792 02:27:00 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:28.792 02:27:00 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:28.792 02:27:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4133717' 00:07:28.792 killing process with pid 4133717 00:07:28.792 02:27:00 -- common/autotest_common.sh@955 -- # kill 4133717 00:07:28.792 02:27:00 -- common/autotest_common.sh@960 -- # wait 4133717 00:07:28.792 nvmf threads initialize successfully 00:07:28.792 bdev subsystem init successfully 00:07:28.792 created a nvmf target service 00:07:28.792 create targets's poll groups done 00:07:28.792 all subsystems of target started 00:07:28.792 nvmf target is running 00:07:28.792 all subsystems of target stopped 00:07:28.792 destroy targets's poll groups done 00:07:28.792 destroyed the nvmf target service 00:07:28.792 bdev subsystem finish successfully 00:07:28.792 nvmf threads destroy successfully 00:07:28.792 02:27:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:28.792 02:27:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:28.792 02:27:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:28.792 02:27:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.792 02:27:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.792 02:27:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.792 02:27:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.792 02:27:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.053 02:27:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:29.053 02:27:02 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:29.053 02:27:02 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:07:29.053 02:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:29.053 00:07:29.053 real 0m20.278s 00:07:29.053 user 0m46.093s 00:07:29.053 sys 0m6.041s 00:07:29.053 02:27:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:29.053 02:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:29.053 ************************************ 00:07:29.053 END TEST nvmf_example 00:07:29.053 ************************************ 00:07:29.053 02:27:02 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:29.053 02:27:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:29.053 02:27:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.053 02:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:29.317 ************************************ 00:07:29.317 START TEST nvmf_filesystem 00:07:29.317 ************************************ 00:07:29.317 02:27:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:29.317 * Looking for test storage... 00:07:29.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.317 02:27:02 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:29.317 02:27:02 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:29.317 02:27:02 -- common/autotest_common.sh@34 -- # set -e 00:07:29.317 02:27:02 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:29.317 02:27:02 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:29.317 02:27:02 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:29.317 02:27:02 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:29.317 02:27:02 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:29.317 02:27:02 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:29.317 02:27:02 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:29.317 02:27:02 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:29.317 02:27:02 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:29.317 02:27:02 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:29.317 02:27:02 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:29.317 02:27:02 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:29.317 02:27:02 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:29.317 02:27:02 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:29.317 02:27:02 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:29.317 02:27:02 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:29.317 02:27:02 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:29.317 02:27:02 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:29.317 02:27:02 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:29.317 02:27:02 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:29.317 02:27:02 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:29.317 02:27:02 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:29.317 02:27:02 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:29.317 02:27:02 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:29.317 02:27:02 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:29.317 02:27:02 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:29.317 02:27:02 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:29.317 02:27:02 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:29.317 02:27:02 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:29.317 02:27:02 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:29.317 02:27:02 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:29.317 02:27:02 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:29.317 02:27:02 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:29.317 02:27:02 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:29.317 02:27:02 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:29.317 02:27:02 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:29.317 02:27:02 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:29.317 02:27:02 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:29.317 02:27:02 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:29.317 02:27:02 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:29.317 02:27:02 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:29.317 02:27:02 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:29.317 02:27:02 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:29.317 02:27:02 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:29.317 02:27:02 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:29.317 02:27:02 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:29.317 02:27:02 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:29.317 02:27:02 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:29.317 02:27:02 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:29.317 02:27:02 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:29.317 02:27:02 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:29.317 02:27:02 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:29.317 02:27:02 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:29.317 02:27:02 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:29.317 02:27:02 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:29.317 02:27:02 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:29.317 02:27:02 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:29.317 02:27:02 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:29.317 02:27:02 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:29.317 02:27:02 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:29.317 02:27:02 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:29.317 02:27:02 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:29.317 02:27:02 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:29.317 02:27:02 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:29.318 02:27:02 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:29.318 02:27:02 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:29.318 02:27:02 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:07:29.318 02:27:02 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:07:29.318 02:27:02 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:29.318 
02:27:02 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:07:29.318 02:27:02 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:29.318 02:27:02 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:29.318 02:27:02 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:29.318 02:27:02 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:29.318 02:27:02 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:29.318 02:27:02 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:29.318 02:27:02 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:29.318 02:27:02 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:29.318 02:27:02 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:29.318 02:27:02 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:29.318 02:27:02 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:29.318 02:27:02 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:29.318 02:27:02 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:29.318 02:27:02 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:29.318 02:27:02 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:29.318 02:27:02 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:29.318 02:27:02 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:07:29.318 02:27:02 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:29.318 02:27:02 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:29.318 02:27:02 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:29.318 02:27:02 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:29.318 02:27:02 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:29.318 02:27:02 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:29.318 02:27:02 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:29.318 02:27:02 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:29.318 02:27:02 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:29.318 02:27:02 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:29.318 02:27:02 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:29.318 02:27:02 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:29.318 02:27:02 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:29.318 02:27:02 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:29.318 02:27:02 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:29.318 02:27:02 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:29.318 #define SPDK_CONFIG_H 00:07:29.318 #define SPDK_CONFIG_APPS 1 00:07:29.318 #define SPDK_CONFIG_ARCH native 00:07:29.318 #undef SPDK_CONFIG_ASAN 00:07:29.318 #undef SPDK_CONFIG_AVAHI 00:07:29.318 #undef SPDK_CONFIG_CET 00:07:29.318 #define SPDK_CONFIG_COVERAGE 1 00:07:29.318 #define SPDK_CONFIG_CROSS_PREFIX 00:07:29.318 #undef SPDK_CONFIG_CRYPTO 00:07:29.318 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:29.318 #undef 
SPDK_CONFIG_CUSTOMOCF 00:07:29.318 #undef SPDK_CONFIG_DAOS 00:07:29.318 #define SPDK_CONFIG_DAOS_DIR 00:07:29.318 #define SPDK_CONFIG_DEBUG 1 00:07:29.318 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:29.318 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:29.318 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:29.318 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:29.318 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:29.318 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:29.318 #define SPDK_CONFIG_EXAMPLES 1 00:07:29.318 #undef SPDK_CONFIG_FC 00:07:29.318 #define SPDK_CONFIG_FC_PATH 00:07:29.318 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:29.318 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:29.318 #undef SPDK_CONFIG_FUSE 00:07:29.318 #undef SPDK_CONFIG_FUZZER 00:07:29.318 #define SPDK_CONFIG_FUZZER_LIB 00:07:29.318 #undef SPDK_CONFIG_GOLANG 00:07:29.318 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:29.318 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:29.318 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:29.318 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:29.318 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:29.318 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:29.318 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:29.318 #define SPDK_CONFIG_IDXD 1 00:07:29.318 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:29.318 #undef SPDK_CONFIG_IPSEC_MB 00:07:29.318 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:29.318 #define SPDK_CONFIG_ISAL 1 00:07:29.318 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:29.318 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:29.318 #define SPDK_CONFIG_LIBDIR 00:07:29.318 #undef SPDK_CONFIG_LTO 00:07:29.318 #define SPDK_CONFIG_MAX_LCORES 00:07:29.318 #define SPDK_CONFIG_NVME_CUSE 1 00:07:29.318 #undef SPDK_CONFIG_OCF 00:07:29.318 #define SPDK_CONFIG_OCF_PATH 00:07:29.318 #define SPDK_CONFIG_OPENSSL_PATH 00:07:29.318 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:29.318 #define SPDK_CONFIG_PGO_DIR 00:07:29.318 #undef SPDK_CONFIG_PGO_USE 00:07:29.318 #define SPDK_CONFIG_PREFIX /usr/local 00:07:29.318 #undef SPDK_CONFIG_RAID5F 00:07:29.318 #undef SPDK_CONFIG_RBD 00:07:29.318 #define SPDK_CONFIG_RDMA 1 00:07:29.318 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:29.318 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:29.318 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:29.318 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:29.318 #define SPDK_CONFIG_SHARED 1 00:07:29.318 #undef SPDK_CONFIG_SMA 00:07:29.318 #define SPDK_CONFIG_TESTS 1 00:07:29.318 #undef SPDK_CONFIG_TSAN 00:07:29.318 #define SPDK_CONFIG_UBLK 1 00:07:29.318 #define SPDK_CONFIG_UBSAN 1 00:07:29.318 #undef SPDK_CONFIG_UNIT_TESTS 00:07:29.318 #undef SPDK_CONFIG_URING 00:07:29.318 #define SPDK_CONFIG_URING_PATH 00:07:29.318 #undef SPDK_CONFIG_URING_ZNS 00:07:29.318 #undef SPDK_CONFIG_USDT 00:07:29.318 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:29.318 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:29.318 #define SPDK_CONFIG_VFIO_USER 1 00:07:29.318 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:29.318 #define SPDK_CONFIG_VHOST 1 00:07:29.318 #define SPDK_CONFIG_VIRTIO 1 00:07:29.318 #undef SPDK_CONFIG_VTUNE 00:07:29.318 #define SPDK_CONFIG_VTUNE_DIR 00:07:29.318 #define SPDK_CONFIG_WERROR 1 00:07:29.318 #define SPDK_CONFIG_WPDK_DIR 00:07:29.318 #undef SPDK_CONFIG_XNVME 00:07:29.318 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:29.318 02:27:02 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:29.318 02:27:02 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.318 02:27:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.318 02:27:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.318 02:27:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.318 02:27:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.318 02:27:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.318 02:27:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.318 02:27:02 -- paths/export.sh@5 -- # export PATH 00:07:29.318 02:27:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.318 02:27:02 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:29.318 02:27:02 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:29.318 02:27:02 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:29.318 02:27:02 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:29.318 02:27:02 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:29.318 02:27:02 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:29.318 02:27:02 -- pm/common@67 -- # TEST_TAG=N/A 00:07:29.318 02:27:02 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:29.318 02:27:02 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:29.318 02:27:02 -- pm/common@71 -- # uname -s 00:07:29.318 02:27:02 -- pm/common@71 -- # PM_OS=Linux 00:07:29.318 02:27:02 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:29.318 02:27:02 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:07:29.319 02:27:02 -- pm/common@76 -- # [[ Linux == Linux ]] 00:07:29.319 02:27:02 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:07:29.319 02:27:02 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:07:29.319 02:27:02 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:29.319 02:27:02 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:29.319 02:27:02 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:07:29.319 02:27:02 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:07:29.319 02:27:02 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:29.319 02:27:02 -- common/autotest_common.sh@57 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:29.319 02:27:02 -- common/autotest_common.sh@61 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:29.319 02:27:02 -- common/autotest_common.sh@63 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:29.319 02:27:02 -- common/autotest_common.sh@65 -- # : 1 00:07:29.319 02:27:02 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:29.319 02:27:02 -- common/autotest_common.sh@67 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:29.319 02:27:02 -- common/autotest_common.sh@69 -- # : 00:07:29.319 02:27:02 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:29.319 02:27:02 -- common/autotest_common.sh@71 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:29.319 02:27:02 -- common/autotest_common.sh@73 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:29.319 02:27:02 -- common/autotest_common.sh@75 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:29.319 02:27:02 -- common/autotest_common.sh@77 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:29.319 02:27:02 -- common/autotest_common.sh@79 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:29.319 02:27:02 -- common/autotest_common.sh@81 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:29.319 02:27:02 -- common/autotest_common.sh@83 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:29.319 02:27:02 -- common/autotest_common.sh@85 -- # : 1 00:07:29.319 02:27:02 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:29.319 02:27:02 -- common/autotest_common.sh@87 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:29.319 02:27:02 -- common/autotest_common.sh@89 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:29.319 02:27:02 -- common/autotest_common.sh@91 -- # : 1 
00:07:29.319 02:27:02 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:29.319 02:27:02 -- common/autotest_common.sh@93 -- # : 1 00:07:29.319 02:27:02 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:29.319 02:27:02 -- common/autotest_common.sh@95 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:29.319 02:27:02 -- common/autotest_common.sh@97 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:29.319 02:27:02 -- common/autotest_common.sh@99 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:29.319 02:27:02 -- common/autotest_common.sh@101 -- # : tcp 00:07:29.319 02:27:02 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:29.319 02:27:02 -- common/autotest_common.sh@103 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:29.319 02:27:02 -- common/autotest_common.sh@105 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:29.319 02:27:02 -- common/autotest_common.sh@107 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:29.319 02:27:02 -- common/autotest_common.sh@109 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:29.319 02:27:02 -- common/autotest_common.sh@111 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:29.319 02:27:02 -- common/autotest_common.sh@113 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:29.319 02:27:02 -- common/autotest_common.sh@115 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:29.319 02:27:02 -- common/autotest_common.sh@117 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:29.319 02:27:02 -- common/autotest_common.sh@119 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:29.319 02:27:02 -- common/autotest_common.sh@121 -- # : 1 00:07:29.319 02:27:02 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:29.319 02:27:02 -- common/autotest_common.sh@123 -- # : 00:07:29.319 02:27:02 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:29.319 02:27:02 -- common/autotest_common.sh@125 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:29.319 02:27:02 -- common/autotest_common.sh@127 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:29.319 02:27:02 -- common/autotest_common.sh@129 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:29.319 02:27:02 -- common/autotest_common.sh@131 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:29.319 02:27:02 -- common/autotest_common.sh@133 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:29.319 02:27:02 -- common/autotest_common.sh@135 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:29.319 02:27:02 -- common/autotest_common.sh@137 -- # : 00:07:29.319 02:27:02 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:29.319 02:27:02 -- 
common/autotest_common.sh@139 -- # : true 00:07:29.319 02:27:02 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:29.319 02:27:02 -- common/autotest_common.sh@141 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:29.319 02:27:02 -- common/autotest_common.sh@143 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:29.319 02:27:02 -- common/autotest_common.sh@145 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:29.319 02:27:02 -- common/autotest_common.sh@147 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:29.319 02:27:02 -- common/autotest_common.sh@149 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:29.319 02:27:02 -- common/autotest_common.sh@151 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:29.319 02:27:02 -- common/autotest_common.sh@153 -- # : e810 00:07:29.319 02:27:02 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:29.319 02:27:02 -- common/autotest_common.sh@155 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:29.319 02:27:02 -- common/autotest_common.sh@157 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:29.319 02:27:02 -- common/autotest_common.sh@159 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:29.319 02:27:02 -- common/autotest_common.sh@161 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:29.319 02:27:02 -- common/autotest_common.sh@163 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:29.319 02:27:02 -- common/autotest_common.sh@166 -- # : 00:07:29.319 02:27:02 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:29.319 02:27:02 -- common/autotest_common.sh@168 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:29.319 02:27:02 -- common/autotest_common.sh@170 -- # : 0 00:07:29.319 02:27:02 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:29.319 02:27:02 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:29.319 02:27:02 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:29.319 02:27:02 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:29.319 02:27:02 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:29.319 02:27:02 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.319 02:27:02 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.319 02:27:02 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.319 02:27:02 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.319 02:27:02 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:29.319 02:27:02 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:29.319 02:27:02 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:29.320 02:27:02 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:29.320 02:27:02 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:29.320 02:27:02 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:29.320 02:27:02 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:29.320 02:27:02 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:29.320 02:27:02 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:29.320 02:27:02 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:29.320 02:27:02 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:29.320 02:27:02 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:29.320 02:27:02 -- common/autotest_common.sh@199 -- # cat 00:07:29.320 02:27:02 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:07:29.320 02:27:02 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:29.320 02:27:02 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:29.320 02:27:02 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:29.320 02:27:02 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:29.320 02:27:02 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:07:29.320 02:27:02 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:07:29.320 02:27:02 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:29.320 02:27:02 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:29.320 02:27:02 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:29.320 02:27:02 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:29.320 02:27:02 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:29.320 02:27:02 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:29.320 02:27:02 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:29.320 02:27:02 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:29.320 02:27:02 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:29.320 02:27:02 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:29.320 02:27:02 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:29.320 02:27:02 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:29.320 02:27:02 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:07:29.320 02:27:02 -- common/autotest_common.sh@252 -- # export valgrind= 00:07:29.320 02:27:02 -- common/autotest_common.sh@252 -- # valgrind= 00:07:29.320 02:27:02 -- common/autotest_common.sh@258 -- # uname -s 00:07:29.320 02:27:02 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:07:29.320 02:27:02 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:07:29.320 02:27:02 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:07:29.320 02:27:02 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:07:29.320 02:27:02 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:29.320 02:27:02 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:29.320 
02:27:02 -- common/autotest_common.sh@268 -- # MAKE=make 00:07:29.320 02:27:02 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j144 00:07:29.320 02:27:02 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:07:29.320 02:27:02 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:07:29.320 02:27:02 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:07:29.320 02:27:02 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:07:29.320 02:27:02 -- common/autotest_common.sh@289 -- # for i in "$@" 00:07:29.320 02:27:02 -- common/autotest_common.sh@290 -- # case "$i" in 00:07:29.320 02:27:02 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:07:29.320 02:27:02 -- common/autotest_common.sh@307 -- # [[ -z 4136634 ]] 00:07:29.320 02:27:02 -- common/autotest_common.sh@307 -- # kill -0 4136634 00:07:29.320 02:27:02 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:29.320 02:27:02 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:07:29.320 02:27:02 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:07:29.320 02:27:02 -- common/autotest_common.sh@320 -- # local mount target_dir 00:07:29.320 02:27:02 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:07:29.320 02:27:02 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:07:29.320 02:27:02 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:07:29.320 02:27:02 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:07:29.320 02:27:02 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.0g6QMG 00:07:29.320 02:27:02 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:29.320 02:27:02 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:07:29.320 02:27:02 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:29.320 02:27:02 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.0g6QMG/tests/target /tmp/spdk.0g6QMG 00:07:29.320 02:27:02 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:07:29.320 02:27:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:29.320 02:27:02 -- common/autotest_common.sh@316 -- # df -T 00:07:29.320 02:27:02 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:07:29.320 02:27:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:07:29.320 02:27:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:07:29.320 02:27:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:07:29.320 02:27:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=115642368000 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=129371037696 00:07:29.320 02:27:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=13728669696 00:07:29.320 02:27:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=64682905600 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685518848 00:07:29.320 02:27:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:07:29.320 02:27:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=25864511488 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=25874210816 00:07:29.320 02:27:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=9699328 00:07:29.320 02:27:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=efivarfs 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=efivarfs 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=234496 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=507904 00:07:29.320 02:27:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=269312 00:07:29.320 02:27:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=64684453888 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685518848 00:07:29.320 02:27:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=1064960 00:07:29.320 02:27:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:29.320 02:27:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=12937097216 00:07:29.320 02:27:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12937101312 00:07:29.320 02:27:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:29.320 02:27:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:29.320 02:27:02 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:07:29.320 * Looking for test storage... 
00:07:29.320 02:27:02 -- common/autotest_common.sh@357 -- # local target_space new_size 00:07:29.320 02:27:02 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:07:29.320 02:27:02 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.320 02:27:02 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:29.320 02:27:02 -- common/autotest_common.sh@361 -- # mount=/ 00:07:29.320 02:27:02 -- common/autotest_common.sh@363 -- # target_space=115642368000 00:07:29.321 02:27:02 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:07:29.321 02:27:02 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:07:29.321 02:27:02 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:07:29.321 02:27:02 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:07:29.321 02:27:02 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:07:29.321 02:27:02 -- common/autotest_common.sh@370 -- # new_size=15943262208 00:07:29.321 02:27:02 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:29.321 02:27:02 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.321 02:27:02 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.321 02:27:02 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.321 02:27:02 -- common/autotest_common.sh@378 -- # return 0 00:07:29.321 02:27:02 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:29.321 02:27:02 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:29.321 02:27:02 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:29.321 02:27:02 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:29.321 02:27:02 -- common/autotest_common.sh@1673 -- # true 00:07:29.321 02:27:02 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:29.581 02:27:02 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:29.581 02:27:02 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:29.581 02:27:02 -- common/autotest_common.sh@27 -- # exec 00:07:29.581 02:27:02 -- common/autotest_common.sh@29 -- # exec 00:07:29.581 02:27:02 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:29.581 02:27:02 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:29.581 02:27:02 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:29.581 02:27:02 -- common/autotest_common.sh@18 -- # set -x 00:07:29.581 02:27:02 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.581 02:27:02 -- nvmf/common.sh@7 -- # uname -s 00:07:29.581 02:27:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.581 02:27:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.581 02:27:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.581 02:27:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.581 02:27:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.582 02:27:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.582 02:27:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.582 02:27:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.582 02:27:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.582 02:27:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.582 02:27:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:29.582 02:27:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:29.582 02:27:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.582 02:27:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.582 02:27:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.582 02:27:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.582 02:27:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.582 02:27:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.582 02:27:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.582 02:27:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.582 02:27:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.582 02:27:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.582 02:27:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.582 02:27:02 -- paths/export.sh@5 -- # export PATH 00:07:29.582 02:27:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.582 02:27:02 -- nvmf/common.sh@47 -- # : 0 00:07:29.582 02:27:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.582 02:27:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.582 02:27:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.582 02:27:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.582 02:27:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.582 02:27:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.582 02:27:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.582 02:27:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.582 02:27:02 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:29.582 02:27:02 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:29.582 02:27:02 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:29.582 02:27:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:29.582 02:27:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.582 02:27:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:29.582 02:27:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:29.582 02:27:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:29.582 02:27:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.582 02:27:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.582 02:27:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.582 02:27:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:29.582 02:27:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:29.582 02:27:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:29.582 02:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:36.169 02:27:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:36.169 02:27:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:36.169 02:27:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:36.169 02:27:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:36.169 02:27:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:36.169 02:27:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:36.169 02:27:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:36.169 02:27:09 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:36.169 02:27:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:36.169 02:27:09 -- nvmf/common.sh@296 -- # e810=() 00:07:36.169 02:27:09 -- nvmf/common.sh@296 -- # local -ga e810 00:07:36.169 02:27:09 -- nvmf/common.sh@297 -- # x722=() 00:07:36.169 02:27:09 -- nvmf/common.sh@297 -- # local -ga x722 00:07:36.169 02:27:09 -- nvmf/common.sh@298 -- # mlx=() 00:07:36.169 02:27:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:36.169 02:27:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.169 02:27:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.169 02:27:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.169 02:27:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.169 02:27:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.169 02:27:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.169 02:27:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.169 02:27:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.169 02:27:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.169 02:27:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.169 02:27:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.169 02:27:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:36.169 02:27:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:36.169 02:27:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:36.169 02:27:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.169 02:27:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:36.169 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:36.169 02:27:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.169 02:27:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:36.169 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:36.169 02:27:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:36.169 02:27:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.169 02:27:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.169 02:27:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:36.169 02:27:09 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.169 02:27:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:36.169 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:36.169 02:27:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.169 02:27:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.169 02:27:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.169 02:27:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:36.169 02:27:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.169 02:27:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:36.169 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:36.169 02:27:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.169 02:27:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:36.169 02:27:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:36.169 02:27:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:36.169 02:27:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:36.169 02:27:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.169 02:27:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.169 02:27:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.169 02:27:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:36.169 02:27:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.169 02:27:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.169 02:27:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:36.169 02:27:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.169 02:27:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.169 02:27:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:36.169 02:27:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:36.169 02:27:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.169 02:27:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.169 02:27:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.169 02:27:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.169 02:27:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:36.169 02:27:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.169 02:27:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.169 02:27:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.169 02:27:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:36.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:36.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:07:36.169 00:07:36.169 --- 10.0.0.2 ping statistics --- 00:07:36.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.169 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:07:36.169 02:27:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:36.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:07:36.169 00:07:36.169 --- 10.0.0.1 ping statistics --- 00:07:36.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.169 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:07:36.169 02:27:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.170 02:27:09 -- nvmf/common.sh@411 -- # return 0 00:07:36.170 02:27:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:36.170 02:27:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.170 02:27:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:36.170 02:27:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:36.170 02:27:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.170 02:27:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:36.170 02:27:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:36.170 02:27:09 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:36.170 02:27:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:36.170 02:27:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.170 02:27:09 -- common/autotest_common.sh@10 -- # set +x 00:07:36.170 ************************************ 00:07:36.170 START TEST nvmf_filesystem_no_in_capsule 00:07:36.170 ************************************ 00:07:36.170 02:27:09 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:36.170 02:27:09 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:36.170 02:27:09 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:36.170 02:27:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:36.170 02:27:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:36.170 02:27:09 -- common/autotest_common.sh@10 -- # set +x 00:07:36.170 02:27:09 -- nvmf/common.sh@470 -- # nvmfpid=4140502 00:07:36.170 02:27:09 -- nvmf/common.sh@471 -- # waitforlisten 4140502 00:07:36.170 02:27:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:36.170 02:27:09 -- common/autotest_common.sh@817 -- # '[' -z 4140502 ']' 00:07:36.170 02:27:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.170 02:27:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:36.170 02:27:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.170 02:27:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:36.170 02:27:09 -- common/autotest_common.sh@10 -- # set +x 00:07:36.170 [2024-04-27 02:27:09.741301] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:07:36.170 [2024-04-27 02:27:09.741345] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.170 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.430 [2024-04-27 02:27:09.805898] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.430 [2024-04-27 02:27:09.871490] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
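The nvmf_tcp_init sequence traced above reduces to a handful of iproute2/iptables commands plus launching nvmf_tgt inside the new network namespace. A minimal sketch of that flow, using the interface names and addresses from this run (cvl_0_0/cvl_0_1 and 10.0.0.1/10.0.0.2 are specific to this host; the final wait loop is a hypothetical inline equivalent of the suite's waitforlisten helper):

    # Give the target its own network namespace and move one NIC port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Initiator keeps cvl_0_1 on 10.0.0.1; the target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic on the default port and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the SPDK target inside the namespace (as traced: -i 0 -e 0xFFFF -m 0xF)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # waitforlisten polls the RPC socket until the app is ready (hypothetical equivalent):
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done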
00:07:36.430 [2024-04-27 02:27:09.871528] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.430 [2024-04-27 02:27:09.871537] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.430 [2024-04-27 02:27:09.871545] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.430 [2024-04-27 02:27:09.871552] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.430 [2024-04-27 02:27:09.871681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.430 [2024-04-27 02:27:09.871777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.430 [2024-04-27 02:27:09.871904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.430 [2024-04-27 02:27:09.871908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.001 02:27:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:37.001 02:27:10 -- common/autotest_common.sh@850 -- # return 0 00:07:37.001 02:27:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:37.001 02:27:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:37.001 02:27:10 -- common/autotest_common.sh@10 -- # set +x 00:07:37.001 02:27:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.001 02:27:10 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:37.001 02:27:10 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:37.001 02:27:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.001 02:27:10 -- common/autotest_common.sh@10 -- # set +x 00:07:37.001 [2024-04-27 02:27:10.557864] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.001 02:27:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.001 02:27:10 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:37.001 02:27:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.001 02:27:10 -- common/autotest_common.sh@10 -- # set +x 00:07:37.262 Malloc1 00:07:37.262 02:27:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.262 02:27:10 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:37.262 02:27:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.262 02:27:10 -- common/autotest_common.sh@10 -- # set +x 00:07:37.262 02:27:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.262 02:27:10 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:37.262 02:27:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.262 02:27:10 -- common/autotest_common.sh@10 -- # set +x 00:07:37.262 02:27:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.262 02:27:10 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.262 02:27:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.262 02:27:10 -- common/autotest_common.sh@10 -- # set +x 00:07:37.262 [2024-04-27 02:27:10.689579] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.262 02:27:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.262 02:27:10 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:07:37.262 02:27:10 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:37.262 02:27:10 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:37.262 02:27:10 -- common/autotest_common.sh@1366 -- # local bs 00:07:37.262 02:27:10 -- common/autotest_common.sh@1367 -- # local nb 00:07:37.262 02:27:10 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:37.262 02:27:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.263 02:27:10 -- common/autotest_common.sh@10 -- # set +x 00:07:37.263 02:27:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.263 02:27:10 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:37.263 { 00:07:37.263 "name": "Malloc1", 00:07:37.263 "aliases": [ 00:07:37.263 "387f8dec-b81e-4c06-80c4-293844c39949" 00:07:37.263 ], 00:07:37.263 "product_name": "Malloc disk", 00:07:37.263 "block_size": 512, 00:07:37.263 "num_blocks": 1048576, 00:07:37.263 "uuid": "387f8dec-b81e-4c06-80c4-293844c39949", 00:07:37.263 "assigned_rate_limits": { 00:07:37.263 "rw_ios_per_sec": 0, 00:07:37.263 "rw_mbytes_per_sec": 0, 00:07:37.263 "r_mbytes_per_sec": 0, 00:07:37.263 "w_mbytes_per_sec": 0 00:07:37.263 }, 00:07:37.263 "claimed": true, 00:07:37.263 "claim_type": "exclusive_write", 00:07:37.263 "zoned": false, 00:07:37.263 "supported_io_types": { 00:07:37.263 "read": true, 00:07:37.263 "write": true, 00:07:37.263 "unmap": true, 00:07:37.263 "write_zeroes": true, 00:07:37.263 "flush": true, 00:07:37.263 "reset": true, 00:07:37.263 "compare": false, 00:07:37.263 "compare_and_write": false, 00:07:37.263 "abort": true, 00:07:37.263 "nvme_admin": false, 00:07:37.263 "nvme_io": false 00:07:37.263 }, 00:07:37.263 "memory_domains": [ 00:07:37.263 { 00:07:37.263 "dma_device_id": "system", 00:07:37.263 "dma_device_type": 1 00:07:37.263 }, 00:07:37.263 { 00:07:37.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.263 "dma_device_type": 2 00:07:37.263 } 00:07:37.263 ], 00:07:37.263 "driver_specific": {} 00:07:37.263 } 00:07:37.263 ]' 00:07:37.263 02:27:10 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:37.263 02:27:10 -- common/autotest_common.sh@1369 -- # bs=512 00:07:37.263 02:27:10 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:37.263 02:27:10 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:37.263 02:27:10 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:37.263 02:27:10 -- common/autotest_common.sh@1374 -- # echo 512 00:07:37.263 02:27:10 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:37.263 02:27:10 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:39.175 02:27:12 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:39.175 02:27:12 -- common/autotest_common.sh@1184 -- # local i=0 00:07:39.175 02:27:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:39.175 02:27:12 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:39.175 02:27:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:41.088 02:27:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:41.088 02:27:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:41.088 02:27:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:41.088 02:27:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
00:07:41.088 02:27:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:41.088 02:27:14 -- common/autotest_common.sh@1194 -- # return 0 00:07:41.088 02:27:14 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:41.088 02:27:14 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:41.088 02:27:14 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:41.088 02:27:14 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:41.088 02:27:14 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:41.088 02:27:14 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:41.088 02:27:14 -- setup/common.sh@80 -- # echo 536870912 00:07:41.088 02:27:14 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:41.088 02:27:14 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:41.088 02:27:14 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:41.088 02:27:14 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:41.088 02:27:14 -- target/filesystem.sh@69 -- # partprobe 00:07:41.659 02:27:15 -- target/filesystem.sh@70 -- # sleep 1 00:07:42.603 02:27:16 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:42.603 02:27:16 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:42.603 02:27:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:42.603 02:27:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.603 02:27:16 -- common/autotest_common.sh@10 -- # set +x 00:07:42.865 ************************************ 00:07:42.865 START TEST filesystem_ext4 00:07:42.865 ************************************ 00:07:42.865 02:27:16 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:42.865 02:27:16 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:42.865 02:27:16 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.865 02:27:16 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:42.865 02:27:16 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:42.865 02:27:16 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:42.865 02:27:16 -- common/autotest_common.sh@914 -- # local i=0 00:07:42.865 02:27:16 -- common/autotest_common.sh@915 -- # local force 00:07:42.865 02:27:16 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:42.865 02:27:16 -- common/autotest_common.sh@918 -- # force=-F 00:07:42.865 02:27:16 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:42.865 mke2fs 1.46.5 (30-Dec-2021) 00:07:42.865 Discarding device blocks: 0/522240 done 00:07:42.865 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:42.865 Filesystem UUID: ce97a50a-b3ba-4a1e-8bcb-96983d1aa6f6 00:07:42.865 Superblock backups stored on blocks: 00:07:42.865 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:42.865 00:07:42.865 Allocating group tables: 0/64 done 00:07:42.865 Writing inode tables: 0/64 done 00:07:46.168 Creating journal (8192 blocks): done 00:07:46.947 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:07:46.947 00:07:46.947 02:27:20 -- common/autotest_common.sh@931 -- # return 0 00:07:46.947 02:27:20 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.888 02:27:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.888 02:27:21 -- target/filesystem.sh@25 -- # sync 00:07:47.888 02:27:21 -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:07:47.888 02:27:21 -- target/filesystem.sh@27 -- # sync 00:07:47.888 02:27:21 -- target/filesystem.sh@29 -- # i=0 00:07:47.888 02:27:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.888 02:27:21 -- target/filesystem.sh@37 -- # kill -0 4140502 00:07:47.888 02:27:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.888 02:27:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.888 02:27:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.888 02:27:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.888 00:07:47.888 real 0m4.963s 00:07:47.888 user 0m0.025s 00:07:47.888 sys 0m0.051s 00:07:47.888 02:27:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:47.888 02:27:21 -- common/autotest_common.sh@10 -- # set +x 00:07:47.888 ************************************ 00:07:47.888 END TEST filesystem_ext4 00:07:47.888 ************************************ 00:07:47.888 02:27:21 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:47.888 02:27:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:47.888 02:27:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.888 02:27:21 -- common/autotest_common.sh@10 -- # set +x 00:07:47.888 ************************************ 00:07:47.888 START TEST filesystem_btrfs 00:07:47.888 ************************************ 00:07:47.888 02:27:21 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:47.888 02:27:21 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:47.888 02:27:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.888 02:27:21 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:47.888 02:27:21 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:47.888 02:27:21 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:47.888 02:27:21 -- common/autotest_common.sh@914 -- # local i=0 00:07:47.888 02:27:21 -- common/autotest_common.sh@915 -- # local force 00:07:47.888 02:27:21 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:47.888 02:27:21 -- common/autotest_common.sh@920 -- # force=-f 00:07:47.888 02:27:21 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:48.460 btrfs-progs v6.6.2 00:07:48.460 See https://btrfs.readthedocs.io for more information. 00:07:48.460 00:07:48.460 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:48.460 NOTE: several default settings have changed in version 5.15, please make sure 00:07:48.460 this does not affect your deployments: 00:07:48.460 - DUP for metadata (-m dup) 00:07:48.460 - enabled no-holes (-O no-holes) 00:07:48.460 - enabled free-space-tree (-R free-space-tree) 00:07:48.460 00:07:48.460 Label: (null) 00:07:48.460 UUID: 7dd7b5d8-2ea8-4344-b910-39b17f07ec4f 00:07:48.460 Node size: 16384 00:07:48.460 Sector size: 4096 00:07:48.460 Filesystem size: 510.00MiB 00:07:48.460 Block group profiles: 00:07:48.460 Data: single 8.00MiB 00:07:48.460 Metadata: DUP 32.00MiB 00:07:48.460 System: DUP 8.00MiB 00:07:48.460 SSD detected: yes 00:07:48.460 Zoned device: no 00:07:48.460 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:48.460 Runtime features: free-space-tree 00:07:48.460 Checksum: crc32c 00:07:48.460 Number of devices: 1 00:07:48.460 Devices: 00:07:48.460 ID SIZE PATH 00:07:48.460 1 510.00MiB /dev/nvme0n1p1 00:07:48.460 00:07:48.460 02:27:21 -- common/autotest_common.sh@931 -- # return 0 00:07:48.460 02:27:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:49.033 02:27:22 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:49.033 02:27:22 -- target/filesystem.sh@25 -- # sync 00:07:49.033 02:27:22 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:49.033 02:27:22 -- target/filesystem.sh@27 -- # sync 00:07:49.033 02:27:22 -- target/filesystem.sh@29 -- # i=0 00:07:49.033 02:27:22 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.033 02:27:22 -- target/filesystem.sh@37 -- # kill -0 4140502 00:07:49.033 02:27:22 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.033 02:27:22 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.033 02:27:22 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:49.033 02:27:22 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.033 00:07:49.033 real 0m1.090s 00:07:49.033 user 0m0.023s 00:07:49.033 sys 0m0.068s 00:07:49.033 02:27:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:49.033 02:27:22 -- common/autotest_common.sh@10 -- # set +x 00:07:49.033 ************************************ 00:07:49.033 END TEST filesystem_btrfs 00:07:49.033 ************************************ 00:07:49.033 02:27:22 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:49.033 02:27:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:49.033 02:27:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.033 02:27:22 -- common/autotest_common.sh@10 -- # set +x 00:07:49.293 ************************************ 00:07:49.293 START TEST filesystem_xfs 00:07:49.293 ************************************ 00:07:49.293 02:27:22 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:49.293 02:27:22 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:49.293 02:27:22 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.293 02:27:22 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:49.293 02:27:22 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:49.293 02:27:22 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:49.293 02:27:22 -- common/autotest_common.sh@914 -- # local i=0 00:07:49.293 02:27:22 -- common/autotest_common.sh@915 -- # local force 00:07:49.293 02:27:22 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:49.293 02:27:22 -- common/autotest_common.sh@920 -- # force=-f 00:07:49.293 02:27:22 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:49.293 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:49.293 = sectsz=512 attr=2, projid32bit=1 00:07:49.293 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:49.294 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:49.294 data = bsize=4096 blocks=130560, imaxpct=25 00:07:49.294 = sunit=0 swidth=0 blks 00:07:49.294 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:49.294 log =internal log bsize=4096 blocks=16384, version=2 00:07:49.294 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:49.294 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:50.234 Discarding blocks...Done. 00:07:50.234 02:27:23 -- common/autotest_common.sh@931 -- # return 0 00:07:50.234 02:27:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.780 02:27:26 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:52.780 02:27:26 -- target/filesystem.sh@25 -- # sync 00:07:52.780 02:27:26 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:52.780 02:27:26 -- target/filesystem.sh@27 -- # sync 00:07:52.780 02:27:26 -- target/filesystem.sh@29 -- # i=0 00:07:52.780 02:27:26 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:52.780 02:27:26 -- target/filesystem.sh@37 -- # kill -0 4140502 00:07:52.780 02:27:26 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:52.780 02:27:26 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:52.780 02:27:26 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:52.780 02:27:26 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:52.780 00:07:52.780 real 0m3.470s 00:07:52.780 user 0m0.021s 00:07:52.780 sys 0m0.057s 00:07:52.780 02:27:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:52.780 02:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:52.780 ************************************ 00:07:52.780 END TEST filesystem_xfs 00:07:52.780 ************************************ 00:07:52.780 02:27:26 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:52.780 02:27:26 -- target/filesystem.sh@93 -- # sync 00:07:52.780 02:27:26 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:52.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.780 02:27:26 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:52.780 02:27:26 -- common/autotest_common.sh@1205 -- # local i=0 00:07:52.781 02:27:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:52.781 02:27:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.041 02:27:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:53.041 02:27:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.041 02:27:26 -- common/autotest_common.sh@1217 -- # return 0 00:07:53.041 02:27:26 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:53.041 02:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.041 02:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.041 02:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.041 02:27:26 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:53.041 02:27:26 -- target/filesystem.sh@101 -- # killprocess 4140502 00:07:53.041 02:27:26 -- common/autotest_common.sh@936 -- # '[' -z 4140502 ']' 00:07:53.041 02:27:26 -- common/autotest_common.sh@940 -- # kill -0 4140502 00:07:53.041 02:27:26 -- 
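Each filesystem case in this test follows the same provisioning and teardown pattern; condensed from the trace above into a sketch (rpc_cmd in the trace is the suite's wrapper around the target's JSON-RPC interface, typically scripts/rpc.py, and $NVME_HOSTNQN/$NVME_HOSTID are the values produced by nvme gen-hostnqn earlier in the log):

    # Target side: transport, malloc bdev, subsystem, namespace, listener
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 512 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Host side: connect, find the block device by serial, then exercise one filesystem
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME      # resolves to nvme0n1 in this run
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe
    mkfs.ext4 -F /dev/nvme0n1p1                               # or mkfs.btrfs -f / mkfs.xfs -f
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync
    umount /mnt/device
    # Teardown mirrors the setup
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 && sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill $nvmfpid                                             # killprocess in the trace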
common/autotest_common.sh@941 -- # uname 00:07:53.041 02:27:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:53.041 02:27:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4140502 00:07:53.041 02:27:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.041 02:27:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.041 02:27:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4140502' 00:07:53.041 killing process with pid 4140502 00:07:53.041 02:27:26 -- common/autotest_common.sh@955 -- # kill 4140502 00:07:53.041 02:27:26 -- common/autotest_common.sh@960 -- # wait 4140502 00:07:53.302 02:27:26 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:53.302 00:07:53.302 real 0m17.046s 00:07:53.302 user 1m7.443s 00:07:53.302 sys 0m1.309s 00:07:53.302 02:27:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:53.302 02:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.302 ************************************ 00:07:53.302 END TEST nvmf_filesystem_no_in_capsule 00:07:53.302 ************************************ 00:07:53.302 02:27:26 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:53.302 02:27:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:53.302 02:27:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.302 02:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.302 ************************************ 00:07:53.302 START TEST nvmf_filesystem_in_capsule 00:07:53.302 ************************************ 00:07:53.302 02:27:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:53.302 02:27:26 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:53.302 02:27:26 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:53.302 02:27:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:53.302 02:27:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:53.302 02:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.302 02:27:26 -- nvmf/common.sh@470 -- # nvmfpid=4144350 00:07:53.302 02:27:26 -- nvmf/common.sh@471 -- # waitforlisten 4144350 00:07:53.302 02:27:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:53.302 02:27:26 -- common/autotest_common.sh@817 -- # '[' -z 4144350 ']' 00:07:53.302 02:27:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.302 02:27:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:53.302 02:27:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.302 02:27:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:53.302 02:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.563 [2024-04-27 02:27:26.966284] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:07:53.563 [2024-04-27 02:27:26.966329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.563 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.563 [2024-04-27 02:27:27.030837] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.563 [2024-04-27 02:27:27.094319] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.563 [2024-04-27 02:27:27.094354] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.563 [2024-04-27 02:27:27.094363] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.563 [2024-04-27 02:27:27.094370] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.563 [2024-04-27 02:27:27.094377] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.563 [2024-04-27 02:27:27.094502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.563 [2024-04-27 02:27:27.094622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.563 [2024-04-27 02:27:27.094757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.563 [2024-04-27 02:27:27.094760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.134 02:27:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:54.134 02:27:27 -- common/autotest_common.sh@850 -- # return 0 00:07:54.134 02:27:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:54.134 02:27:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:54.134 02:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:54.395 02:27:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.395 02:27:27 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:54.395 02:27:27 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:54.395 02:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.395 02:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:54.395 [2024-04-27 02:27:27.791889] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.395 02:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.395 02:27:27 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:54.395 02:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.395 02:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:54.395 Malloc1 00:07:54.395 02:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.395 02:27:27 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:54.395 02:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.395 02:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:54.395 02:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.395 02:27:27 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:54.395 02:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.395 02:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:54.395 02:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.395 02:27:27 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:54.395 02:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.395 02:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:54.395 [2024-04-27 02:27:27.922332] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.395 02:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.395 02:27:27 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:54.395 02:27:27 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:54.395 02:27:27 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:54.395 02:27:27 -- common/autotest_common.sh@1366 -- # local bs 00:07:54.395 02:27:27 -- common/autotest_common.sh@1367 -- # local nb 00:07:54.395 02:27:27 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:54.395 02:27:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.395 02:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:54.395 02:27:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.395 02:27:27 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:54.395 { 00:07:54.395 "name": "Malloc1", 00:07:54.395 "aliases": [ 00:07:54.395 "2e4085f6-4e61-4dab-98a3-7862785e9cdf" 00:07:54.395 ], 00:07:54.395 "product_name": "Malloc disk", 00:07:54.395 "block_size": 512, 00:07:54.395 "num_blocks": 1048576, 00:07:54.395 "uuid": "2e4085f6-4e61-4dab-98a3-7862785e9cdf", 00:07:54.395 "assigned_rate_limits": { 00:07:54.395 "rw_ios_per_sec": 0, 00:07:54.395 "rw_mbytes_per_sec": 0, 00:07:54.395 "r_mbytes_per_sec": 0, 00:07:54.395 "w_mbytes_per_sec": 0 00:07:54.395 }, 00:07:54.395 "claimed": true, 00:07:54.395 "claim_type": "exclusive_write", 00:07:54.395 "zoned": false, 00:07:54.395 "supported_io_types": { 00:07:54.395 "read": true, 00:07:54.395 "write": true, 00:07:54.395 "unmap": true, 00:07:54.395 "write_zeroes": true, 00:07:54.395 "flush": true, 00:07:54.395 "reset": true, 00:07:54.395 "compare": false, 00:07:54.395 "compare_and_write": false, 00:07:54.395 "abort": true, 00:07:54.395 "nvme_admin": false, 00:07:54.395 "nvme_io": false 00:07:54.395 }, 00:07:54.395 "memory_domains": [ 00:07:54.395 { 00:07:54.395 "dma_device_id": "system", 00:07:54.395 "dma_device_type": 1 00:07:54.395 }, 00:07:54.395 { 00:07:54.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.395 "dma_device_type": 2 00:07:54.395 } 00:07:54.395 ], 00:07:54.395 "driver_specific": {} 00:07:54.395 } 00:07:54.395 ]' 00:07:54.395 02:27:27 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:54.395 02:27:27 -- common/autotest_common.sh@1369 -- # bs=512 00:07:54.395 02:27:27 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:54.656 02:27:28 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:54.656 02:27:28 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:54.656 02:27:28 -- common/autotest_common.sh@1374 -- # echo 512 00:07:54.656 02:27:28 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:54.656 02:27:28 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:56.041 02:27:29 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:56.041 02:27:29 -- common/autotest_common.sh@1184 -- # local i=0 00:07:56.041 02:27:29 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:56.041 02:27:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:56.041 02:27:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:57.956 02:27:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:57.956 02:27:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:57.956 02:27:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.956 02:27:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:57.956 02:27:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.956 02:27:31 -- common/autotest_common.sh@1194 -- # return 0 00:07:57.956 02:27:31 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:57.956 02:27:31 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:57.956 02:27:31 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:57.956 02:27:31 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:57.956 02:27:31 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:57.956 02:27:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:57.956 02:27:31 -- setup/common.sh@80 -- # echo 536870912 00:07:57.956 02:27:31 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:57.956 02:27:31 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:57.956 02:27:31 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:57.956 02:27:31 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:58.216 02:27:31 -- target/filesystem.sh@69 -- # partprobe 00:07:58.477 02:27:32 -- target/filesystem.sh@70 -- # sleep 1 00:07:59.861 02:27:33 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:59.861 02:27:33 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:59.861 02:27:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:59.861 02:27:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.861 02:27:33 -- common/autotest_common.sh@10 -- # set +x 00:07:59.861 ************************************ 00:07:59.861 START TEST filesystem_in_capsule_ext4 00:07:59.861 ************************************ 00:07:59.861 02:27:33 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:59.861 02:27:33 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:59.861 02:27:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.861 02:27:33 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:59.861 02:27:33 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:59.861 02:27:33 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:59.861 02:27:33 -- common/autotest_common.sh@914 -- # local i=0 00:07:59.861 02:27:33 -- common/autotest_common.sh@915 -- # local force 00:07:59.861 02:27:33 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:59.861 02:27:33 -- common/autotest_common.sh@918 -- # force=-F 00:07:59.861 02:27:33 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:59.861 mke2fs 1.46.5 (30-Dec-2021) 00:07:59.861 Discarding device blocks: 0/522240 done 00:07:59.862 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:59.862 Filesystem UUID: 32601fcf-2f5f-4abb-b3b1-60a84c44edcb 00:07:59.862 Superblock backups stored on blocks: 00:07:59.862 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:59.862 00:07:59.862 
Allocating group tables: 0/64 done 00:07:59.862 Writing inode tables: 0/64 done 00:08:03.243 Creating journal (8192 blocks): done 00:08:03.243 Writing superblocks and filesystem accounting information: 0/64 done 00:08:03.243 00:08:03.243 02:27:36 -- common/autotest_common.sh@931 -- # return 0 00:08:03.243 02:27:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.243 02:27:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.243 02:27:36 -- target/filesystem.sh@25 -- # sync 00:08:03.243 02:27:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.243 02:27:36 -- target/filesystem.sh@27 -- # sync 00:08:03.243 02:27:36 -- target/filesystem.sh@29 -- # i=0 00:08:03.243 02:27:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.243 02:27:36 -- target/filesystem.sh@37 -- # kill -0 4144350 00:08:03.243 02:27:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.243 02:27:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.243 02:27:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.243 02:27:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.243 00:08:03.243 real 0m3.310s 00:08:03.243 user 0m0.023s 00:08:03.243 sys 0m0.052s 00:08:03.243 02:27:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:03.243 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:08:03.243 ************************************ 00:08:03.243 END TEST filesystem_in_capsule_ext4 00:08:03.243 ************************************ 00:08:03.243 02:27:36 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:03.243 02:27:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:03.243 02:27:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.243 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:08:03.243 ************************************ 00:08:03.243 START TEST filesystem_in_capsule_btrfs 00:08:03.243 ************************************ 00:08:03.243 02:27:36 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:03.243 02:27:36 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:03.243 02:27:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.243 02:27:36 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:03.243 02:27:36 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:03.243 02:27:36 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:03.243 02:27:36 -- common/autotest_common.sh@914 -- # local i=0 00:08:03.243 02:27:36 -- common/autotest_common.sh@915 -- # local force 00:08:03.243 02:27:36 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:03.243 02:27:36 -- common/autotest_common.sh@920 -- # force=-f 00:08:03.243 02:27:36 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:03.505 btrfs-progs v6.6.2 00:08:03.505 See https://btrfs.readthedocs.io for more information. 00:08:03.505 00:08:03.505 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:03.505 NOTE: several default settings have changed in version 5.15, please make sure 00:08:03.505 this does not affect your deployments: 00:08:03.505 - DUP for metadata (-m dup) 00:08:03.505 - enabled no-holes (-O no-holes) 00:08:03.505 - enabled free-space-tree (-R free-space-tree) 00:08:03.505 00:08:03.505 Label: (null) 00:08:03.505 UUID: 58b5a9ba-6c5e-4de8-98b5-bba2dd53855f 00:08:03.505 Node size: 16384 00:08:03.505 Sector size: 4096 00:08:03.505 Filesystem size: 510.00MiB 00:08:03.505 Block group profiles: 00:08:03.505 Data: single 8.00MiB 00:08:03.505 Metadata: DUP 32.00MiB 00:08:03.505 System: DUP 8.00MiB 00:08:03.505 SSD detected: yes 00:08:03.505 Zoned device: no 00:08:03.505 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:03.505 Runtime features: free-space-tree 00:08:03.505 Checksum: crc32c 00:08:03.505 Number of devices: 1 00:08:03.505 Devices: 00:08:03.505 ID SIZE PATH 00:08:03.505 1 510.00MiB /dev/nvme0n1p1 00:08:03.505 00:08:03.505 02:27:37 -- common/autotest_common.sh@931 -- # return 0 00:08:03.505 02:27:37 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.079 02:27:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.079 02:27:37 -- target/filesystem.sh@25 -- # sync 00:08:04.079 02:27:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.079 02:27:37 -- target/filesystem.sh@27 -- # sync 00:08:04.079 02:27:37 -- target/filesystem.sh@29 -- # i=0 00:08:04.079 02:27:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.079 02:27:37 -- target/filesystem.sh@37 -- # kill -0 4144350 00:08:04.079 02:27:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.079 02:27:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.079 02:27:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.079 02:27:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.079 00:08:04.079 real 0m0.915s 00:08:04.079 user 0m0.027s 00:08:04.079 sys 0m0.063s 00:08:04.079 02:27:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:04.079 02:27:37 -- common/autotest_common.sh@10 -- # set +x 00:08:04.079 ************************************ 00:08:04.079 END TEST filesystem_in_capsule_btrfs 00:08:04.079 ************************************ 00:08:04.079 02:27:37 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:04.079 02:27:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:04.079 02:27:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.079 02:27:37 -- common/autotest_common.sh@10 -- # set +x 00:08:04.341 ************************************ 00:08:04.341 START TEST filesystem_in_capsule_xfs 00:08:04.341 ************************************ 00:08:04.341 02:27:37 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:08:04.341 02:27:37 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:04.341 02:27:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.341 02:27:37 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:04.342 02:27:37 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:04.342 02:27:37 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:04.342 02:27:37 -- common/autotest_common.sh@914 -- # local i=0 00:08:04.342 02:27:37 -- common/autotest_common.sh@915 -- # local force 00:08:04.342 02:27:37 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:04.342 02:27:37 -- common/autotest_common.sh@920 -- # force=-f 
00:08:04.342 02:27:37 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:04.342 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:04.342 = sectsz=512 attr=2, projid32bit=1 00:08:04.342 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:04.342 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:04.342 data = bsize=4096 blocks=130560, imaxpct=25 00:08:04.342 = sunit=0 swidth=0 blks 00:08:04.342 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:04.342 log =internal log bsize=4096 blocks=16384, version=2 00:08:04.342 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:04.342 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:05.730 Discarding blocks...Done. 00:08:05.730 02:27:38 -- common/autotest_common.sh@931 -- # return 0 00:08:05.730 02:27:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:07.645 02:27:41 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:07.645 02:27:41 -- target/filesystem.sh@25 -- # sync 00:08:07.645 02:27:41 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:07.645 02:27:41 -- target/filesystem.sh@27 -- # sync 00:08:07.645 02:27:41 -- target/filesystem.sh@29 -- # i=0 00:08:07.645 02:27:41 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:07.645 02:27:41 -- target/filesystem.sh@37 -- # kill -0 4144350 00:08:07.645 02:27:41 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:07.645 02:27:41 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:07.645 02:27:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:07.645 02:27:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:07.645 00:08:07.645 real 0m3.368s 00:08:07.645 user 0m0.029s 00:08:07.645 sys 0m0.050s 00:08:07.645 02:27:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:07.645 02:27:41 -- common/autotest_common.sh@10 -- # set +x 00:08:07.645 ************************************ 00:08:07.645 END TEST filesystem_in_capsule_xfs 00:08:07.645 ************************************ 00:08:07.645 02:27:41 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:07.906 02:27:41 -- target/filesystem.sh@93 -- # sync 00:08:07.906 02:27:41 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:08.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.168 02:27:41 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:08.168 02:27:41 -- common/autotest_common.sh@1205 -- # local i=0 00:08:08.168 02:27:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:08.168 02:27:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.168 02:27:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:08.168 02:27:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.168 02:27:41 -- common/autotest_common.sh@1217 -- # return 0 00:08:08.168 02:27:41 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.168 02:27:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:08.168 02:27:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.168 02:27:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:08.168 02:27:41 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:08.168 02:27:41 -- target/filesystem.sh@101 -- # killprocess 4144350 00:08:08.168 02:27:41 -- common/autotest_common.sh@936 -- # '[' -z 4144350 ']' 00:08:08.168 02:27:41 -- common/autotest_common.sh@940 -- # kill -0 4144350 
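The in_capsule run repeats the same flow as the earlier test; the functional difference is the transport configuration, which here allows 4096 bytes of in-capsule data so that small writes can travel inside the command capsule instead of being fetched in a separate data transfer. Sketch of the one changed RPC call, as traced:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # -c 0 in the no_in_capsule run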
00:08:08.168 02:27:41 -- common/autotest_common.sh@941 -- # uname 00:08:08.168 02:27:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:08.168 02:27:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4144350 00:08:08.168 02:27:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:08.168 02:27:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:08.168 02:27:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4144350' 00:08:08.168 killing process with pid 4144350 00:08:08.168 02:27:41 -- common/autotest_common.sh@955 -- # kill 4144350 00:08:08.168 02:27:41 -- common/autotest_common.sh@960 -- # wait 4144350 00:08:08.428 02:27:41 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:08.428 00:08:08.428 real 0m15.044s 00:08:08.428 user 0m59.497s 00:08:08.428 sys 0m1.285s 00:08:08.428 02:27:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.428 02:27:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.428 ************************************ 00:08:08.428 END TEST nvmf_filesystem_in_capsule 00:08:08.428 ************************************ 00:08:08.428 02:27:41 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:08.428 02:27:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:08.428 02:27:41 -- nvmf/common.sh@117 -- # sync 00:08:08.428 02:27:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.428 02:27:41 -- nvmf/common.sh@120 -- # set +e 00:08:08.428 02:27:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.428 02:27:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.428 rmmod nvme_tcp 00:08:08.428 rmmod nvme_fabrics 00:08:08.428 rmmod nvme_keyring 00:08:08.428 02:27:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.428 02:27:42 -- nvmf/common.sh@124 -- # set -e 00:08:08.428 02:27:42 -- nvmf/common.sh@125 -- # return 0 00:08:08.428 02:27:42 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:08:08.428 02:27:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:08.428 02:27:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:08.428 02:27:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:08.428 02:27:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.428 02:27:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.428 02:27:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.429 02:27:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.429 02:27:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.975 02:27:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.975 00:08:10.975 real 0m41.443s 00:08:10.975 user 2m9.093s 00:08:10.975 sys 0m7.678s 00:08:10.975 02:27:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:10.975 02:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:10.975 ************************************ 00:08:10.975 END TEST nvmf_filesystem 00:08:10.975 ************************************ 00:08:10.975 02:27:44 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:10.975 02:27:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:10.975 02:27:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.975 02:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:10.975 ************************************ 00:08:10.975 START TEST nvmf_discovery 00:08:10.975 ************************************ 00:08:10.975 
02:27:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:10.975 * Looking for test storage... 00:08:10.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.975 02:27:44 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.975 02:27:44 -- nvmf/common.sh@7 -- # uname -s 00:08:10.975 02:27:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.975 02:27:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.975 02:27:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.975 02:27:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.975 02:27:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.975 02:27:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.975 02:27:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.975 02:27:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.975 02:27:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.975 02:27:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.975 02:27:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:10.975 02:27:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:10.975 02:27:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.975 02:27:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.975 02:27:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.975 02:27:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.975 02:27:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.975 02:27:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.975 02:27:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.975 02:27:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.976 02:27:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.976 02:27:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.976 02:27:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.976 02:27:44 -- paths/export.sh@5 -- # export PATH 00:08:10.976 02:27:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.976 02:27:44 -- nvmf/common.sh@47 -- # : 0 00:08:10.976 02:27:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.976 02:27:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.976 02:27:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.976 02:27:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.976 02:27:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.976 02:27:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.976 02:27:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.976 02:27:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.976 02:27:44 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:10.976 02:27:44 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:10.976 02:27:44 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:10.976 02:27:44 -- target/discovery.sh@15 -- # hash nvme 00:08:10.976 02:27:44 -- target/discovery.sh@20 -- # nvmftestinit 00:08:10.976 02:27:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:10.976 02:27:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.976 02:27:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:10.976 02:27:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:10.976 02:27:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:10.976 02:27:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.976 02:27:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.976 02:27:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.976 02:27:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:10.976 02:27:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:10.976 02:27:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.976 02:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:17.567 02:27:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:17.567 02:27:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:17.567 02:27:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:17.567 02:27:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:17.567 02:27:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:17.567 02:27:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:17.567 02:27:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:17.567 02:27:51 -- 
nvmf/common.sh@295 -- # net_devs=() 00:08:17.567 02:27:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:17.567 02:27:51 -- nvmf/common.sh@296 -- # e810=() 00:08:17.567 02:27:51 -- nvmf/common.sh@296 -- # local -ga e810 00:08:17.567 02:27:51 -- nvmf/common.sh@297 -- # x722=() 00:08:17.567 02:27:51 -- nvmf/common.sh@297 -- # local -ga x722 00:08:17.567 02:27:51 -- nvmf/common.sh@298 -- # mlx=() 00:08:17.567 02:27:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:17.567 02:27:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.567 02:27:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.567 02:27:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.567 02:27:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.567 02:27:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.567 02:27:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.567 02:27:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.567 02:27:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.567 02:27:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.567 02:27:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.567 02:27:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.567 02:27:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:17.567 02:27:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:17.567 02:27:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:17.567 02:27:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.567 02:27:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:17.567 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:17.567 02:27:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.567 02:27:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:17.567 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:17.567 02:27:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:17.567 02:27:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.567 02:27:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.567 02:27:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:17.567 02:27:51 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.567 02:27:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:17.567 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:17.567 02:27:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.567 02:27:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.567 02:27:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.567 02:27:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:17.567 02:27:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.567 02:27:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:17.567 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:17.567 02:27:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.567 02:27:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:17.567 02:27:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:17.567 02:27:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:17.567 02:27:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:17.567 02:27:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.567 02:27:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.567 02:27:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.567 02:27:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:17.567 02:27:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.567 02:27:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.567 02:27:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:17.567 02:27:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.567 02:27:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.567 02:27:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:17.567 02:27:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:17.567 02:27:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.567 02:27:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.829 02:27:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.829 02:27:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.829 02:27:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:17.829 02:27:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.829 02:27:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.829 02:27:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.829 02:27:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:17.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:08:17.829 00:08:17.829 --- 10.0.0.2 ping statistics --- 00:08:17.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.829 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:08:17.829 02:27:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.433 ms 00:08:17.829 00:08:17.829 --- 10.0.0.1 ping statistics --- 00:08:17.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.829 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:08:17.829 02:27:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.829 02:27:51 -- nvmf/common.sh@411 -- # return 0 00:08:17.829 02:27:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:17.829 02:27:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.829 02:27:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:17.829 02:27:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:17.829 02:27:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.829 02:27:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:17.829 02:27:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:17.829 02:27:51 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:17.829 02:27:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:17.829 02:27:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:17.829 02:27:51 -- common/autotest_common.sh@10 -- # set +x 00:08:17.829 02:27:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.829 02:27:51 -- nvmf/common.sh@470 -- # nvmfpid=4151716 00:08:17.829 02:27:51 -- nvmf/common.sh@471 -- # waitforlisten 4151716 00:08:17.829 02:27:51 -- common/autotest_common.sh@817 -- # '[' -z 4151716 ']' 00:08:17.829 02:27:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.829 02:27:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:17.829 02:27:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.829 02:27:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:17.829 02:27:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.090 [2024-04-27 02:27:51.454635] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:08:18.090 [2024-04-27 02:27:51.454693] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.090 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.090 [2024-04-27 02:27:51.519018] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.090 [2024-04-27 02:27:51.587959] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.090 [2024-04-27 02:27:51.587995] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.090 [2024-04-27 02:27:51.588004] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.090 [2024-04-27 02:27:51.588012] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.090 [2024-04-27 02:27:51.588020] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:18.090 [2024-04-27 02:27:51.588184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.090 [2024-04-27 02:27:51.588318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.090 [2024-04-27 02:27:51.588418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.090 [2024-04-27 02:27:51.588421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.661 02:27:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:18.661 02:27:52 -- common/autotest_common.sh@850 -- # return 0 00:08:18.661 02:27:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:18.661 02:27:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:18.661 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.922 02:27:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.922 02:27:52 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.922 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 [2024-04-27 02:27:52.294948] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@26 -- # seq 1 4 00:08:18.923 02:27:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.923 02:27:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 Null1 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 [2024-04-27 02:27:52.355273] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.923 02:27:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 Null2 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:18.923 02:27:52 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.923 02:27:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 Null3 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.923 02:27:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 Null4 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:18.923 
02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:18.923 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:18.923 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:18.923 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.923 02:27:52 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:19.184 00:08:19.184 Discovery Log Number of Records 6, Generation counter 6 00:08:19.184 =====Discovery Log Entry 0====== 00:08:19.184 trtype: tcp 00:08:19.184 adrfam: ipv4 00:08:19.184 subtype: current discovery subsystem 00:08:19.184 treq: not required 00:08:19.184 portid: 0 00:08:19.184 trsvcid: 4420 00:08:19.184 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:19.184 traddr: 10.0.0.2 00:08:19.184 eflags: explicit discovery connections, duplicate discovery information 00:08:19.184 sectype: none 00:08:19.184 =====Discovery Log Entry 1====== 00:08:19.184 trtype: tcp 00:08:19.184 adrfam: ipv4 00:08:19.184 subtype: nvme subsystem 00:08:19.184 treq: not required 00:08:19.184 portid: 0 00:08:19.184 trsvcid: 4420 00:08:19.184 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:19.184 traddr: 10.0.0.2 00:08:19.184 eflags: none 00:08:19.184 sectype: none 00:08:19.184 =====Discovery Log Entry 2====== 00:08:19.184 trtype: tcp 00:08:19.184 adrfam: ipv4 00:08:19.184 subtype: nvme subsystem 00:08:19.184 treq: not required 00:08:19.184 portid: 0 00:08:19.184 trsvcid: 4420 00:08:19.184 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:19.184 traddr: 10.0.0.2 00:08:19.184 eflags: none 00:08:19.184 sectype: none 00:08:19.184 =====Discovery Log Entry 3====== 00:08:19.184 trtype: tcp 00:08:19.184 adrfam: ipv4 00:08:19.184 subtype: nvme subsystem 00:08:19.184 treq: not required 00:08:19.184 portid: 0 00:08:19.184 trsvcid: 4420 00:08:19.184 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:19.184 traddr: 10.0.0.2 00:08:19.184 eflags: none 00:08:19.184 sectype: none 00:08:19.184 =====Discovery Log Entry 4====== 00:08:19.184 trtype: tcp 00:08:19.184 adrfam: ipv4 00:08:19.184 subtype: nvme subsystem 00:08:19.184 treq: not required 00:08:19.184 portid: 0 00:08:19.184 trsvcid: 4420 00:08:19.184 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:19.184 traddr: 10.0.0.2 00:08:19.184 eflags: none 00:08:19.184 sectype: none 00:08:19.184 =====Discovery Log Entry 5====== 00:08:19.184 trtype: tcp 00:08:19.184 adrfam: ipv4 00:08:19.184 subtype: discovery subsystem referral 00:08:19.184 treq: not required 00:08:19.184 portid: 0 00:08:19.184 trsvcid: 4430 00:08:19.184 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:19.184 traddr: 10.0.0.2 00:08:19.184 eflags: none 00:08:19.184 sectype: none 00:08:19.184 02:27:52 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:19.184 Perform nvmf subsystem discovery via RPC 00:08:19.184 02:27:52 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:19.184 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.184 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.184 [2024-04-27 02:27:52.732359] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:19.184 [ 00:08:19.184 { 00:08:19.184 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:19.184 "subtype": "Discovery", 00:08:19.184 "listen_addresses": [ 00:08:19.184 { 00:08:19.184 "transport": "TCP", 00:08:19.184 "trtype": "TCP", 00:08:19.184 "adrfam": "IPv4", 00:08:19.184 "traddr": "10.0.0.2", 00:08:19.184 "trsvcid": "4420" 00:08:19.184 } 00:08:19.184 ], 00:08:19.184 "allow_any_host": true, 00:08:19.184 "hosts": [] 00:08:19.184 }, 00:08:19.184 { 00:08:19.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:19.184 "subtype": "NVMe", 00:08:19.184 "listen_addresses": [ 00:08:19.184 { 00:08:19.185 "transport": "TCP", 00:08:19.185 "trtype": "TCP", 00:08:19.185 "adrfam": "IPv4", 00:08:19.185 "traddr": "10.0.0.2", 00:08:19.185 "trsvcid": "4420" 00:08:19.185 } 00:08:19.185 ], 00:08:19.185 "allow_any_host": true, 00:08:19.185 "hosts": [], 00:08:19.185 "serial_number": "SPDK00000000000001", 00:08:19.185 "model_number": "SPDK bdev Controller", 00:08:19.185 "max_namespaces": 32, 00:08:19.185 "min_cntlid": 1, 00:08:19.185 "max_cntlid": 65519, 00:08:19.185 "namespaces": [ 00:08:19.185 { 00:08:19.185 "nsid": 1, 00:08:19.185 "bdev_name": "Null1", 00:08:19.185 "name": "Null1", 00:08:19.185 "nguid": "D6BF7ECD87BE452D9439746D33715791", 00:08:19.185 "uuid": "d6bf7ecd-87be-452d-9439-746d33715791" 00:08:19.185 } 00:08:19.185 ] 00:08:19.185 }, 00:08:19.185 { 00:08:19.185 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:19.185 "subtype": "NVMe", 00:08:19.185 "listen_addresses": [ 00:08:19.185 { 00:08:19.185 "transport": "TCP", 00:08:19.185 "trtype": "TCP", 00:08:19.185 "adrfam": "IPv4", 00:08:19.185 "traddr": "10.0.0.2", 00:08:19.185 "trsvcid": "4420" 00:08:19.185 } 00:08:19.185 ], 00:08:19.185 "allow_any_host": true, 00:08:19.185 "hosts": [], 00:08:19.185 "serial_number": "SPDK00000000000002", 00:08:19.185 "model_number": "SPDK bdev Controller", 00:08:19.185 "max_namespaces": 32, 00:08:19.185 "min_cntlid": 1, 00:08:19.185 "max_cntlid": 65519, 00:08:19.185 "namespaces": [ 00:08:19.185 { 00:08:19.185 "nsid": 1, 00:08:19.185 "bdev_name": "Null2", 00:08:19.185 "name": "Null2", 00:08:19.185 "nguid": "60318D20C4284F7DBAF4C3A4DA74FE55", 00:08:19.185 "uuid": "60318d20-c428-4f7d-baf4-c3a4da74fe55" 00:08:19.185 } 00:08:19.185 ] 00:08:19.185 }, 00:08:19.185 { 00:08:19.185 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:19.185 "subtype": "NVMe", 00:08:19.185 "listen_addresses": [ 00:08:19.185 { 00:08:19.185 "transport": "TCP", 00:08:19.185 "trtype": "TCP", 00:08:19.185 "adrfam": "IPv4", 00:08:19.185 "traddr": "10.0.0.2", 00:08:19.185 "trsvcid": "4420" 00:08:19.185 } 00:08:19.185 ], 00:08:19.185 "allow_any_host": true, 00:08:19.185 "hosts": [], 00:08:19.185 "serial_number": "SPDK00000000000003", 00:08:19.185 "model_number": "SPDK bdev Controller", 00:08:19.185 "max_namespaces": 32, 00:08:19.185 "min_cntlid": 1, 00:08:19.185 "max_cntlid": 65519, 00:08:19.185 "namespaces": [ 00:08:19.185 { 00:08:19.185 "nsid": 1, 00:08:19.185 "bdev_name": "Null3", 00:08:19.185 "name": "Null3", 00:08:19.185 "nguid": "26FEC7742F3B451782685D3265025FFE", 00:08:19.185 "uuid": "26fec774-2f3b-4517-8268-5d3265025ffe" 00:08:19.185 } 00:08:19.185 ] 
00:08:19.185 }, 00:08:19.185 { 00:08:19.185 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:19.185 "subtype": "NVMe", 00:08:19.185 "listen_addresses": [ 00:08:19.185 { 00:08:19.185 "transport": "TCP", 00:08:19.185 "trtype": "TCP", 00:08:19.185 "adrfam": "IPv4", 00:08:19.185 "traddr": "10.0.0.2", 00:08:19.185 "trsvcid": "4420" 00:08:19.185 } 00:08:19.185 ], 00:08:19.185 "allow_any_host": true, 00:08:19.185 "hosts": [], 00:08:19.185 "serial_number": "SPDK00000000000004", 00:08:19.185 "model_number": "SPDK bdev Controller", 00:08:19.185 "max_namespaces": 32, 00:08:19.185 "min_cntlid": 1, 00:08:19.185 "max_cntlid": 65519, 00:08:19.185 "namespaces": [ 00:08:19.185 { 00:08:19.185 "nsid": 1, 00:08:19.185 "bdev_name": "Null4", 00:08:19.185 "name": "Null4", 00:08:19.185 "nguid": "2FF1D50D5EAE4DAB8B888CD9F71EF9D2", 00:08:19.185 "uuid": "2ff1d50d-5eae-4dab-8b88-8cd9f71ef9d2" 00:08:19.185 } 00:08:19.185 ] 00:08:19.185 } 00:08:19.185 ] 00:08:19.185 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.185 02:27:52 -- target/discovery.sh@42 -- # seq 1 4 00:08:19.185 02:27:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.185 02:27:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.185 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.185 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.185 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.185 02:27:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:19.185 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.185 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.185 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.185 02:27:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.185 02:27:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:19.185 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.185 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.185 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.185 02:27:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:19.185 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.185 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.185 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.185 02:27:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.185 02:27:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:19.185 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.185 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.446 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.446 02:27:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:19.446 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.446 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.446 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.446 02:27:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.446 02:27:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:19.446 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.446 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.446 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:08:19.446 02:27:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:19.446 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.446 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.446 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.446 02:27:52 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:19.446 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.446 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.446 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.446 02:27:52 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:19.446 02:27:52 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:19.446 02:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.446 02:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.446 02:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.446 02:27:52 -- target/discovery.sh@49 -- # check_bdevs= 00:08:19.446 02:27:52 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:19.446 02:27:52 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:19.446 02:27:52 -- target/discovery.sh@57 -- # nvmftestfini 00:08:19.446 02:27:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:19.446 02:27:52 -- nvmf/common.sh@117 -- # sync 00:08:19.446 02:27:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.446 02:27:52 -- nvmf/common.sh@120 -- # set +e 00:08:19.446 02:27:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.446 02:27:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.446 rmmod nvme_tcp 00:08:19.446 rmmod nvme_fabrics 00:08:19.446 rmmod nvme_keyring 00:08:19.446 02:27:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.446 02:27:52 -- nvmf/common.sh@124 -- # set -e 00:08:19.446 02:27:52 -- nvmf/common.sh@125 -- # return 0 00:08:19.446 02:27:52 -- nvmf/common.sh@478 -- # '[' -n 4151716 ']' 00:08:19.446 02:27:52 -- nvmf/common.sh@479 -- # killprocess 4151716 00:08:19.446 02:27:52 -- common/autotest_common.sh@936 -- # '[' -z 4151716 ']' 00:08:19.446 02:27:52 -- common/autotest_common.sh@940 -- # kill -0 4151716 00:08:19.446 02:27:52 -- common/autotest_common.sh@941 -- # uname 00:08:19.446 02:27:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:19.446 02:27:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4151716 00:08:19.446 02:27:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:19.446 02:27:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:19.446 02:27:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4151716' 00:08:19.446 killing process with pid 4151716 00:08:19.446 02:27:53 -- common/autotest_common.sh@955 -- # kill 4151716 00:08:19.446 [2024-04-27 02:27:53.010674] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:19.446 02:27:53 -- common/autotest_common.sh@960 -- # wait 4151716 00:08:19.706 02:27:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:19.706 02:27:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:19.706 02:27:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:19.706 02:27:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.706 02:27:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.706 02:27:53 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.706 02:27:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.706 02:27:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.623 02:27:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:21.623 00:08:21.623 real 0m10.915s 00:08:21.623 user 0m8.317s 00:08:21.623 sys 0m5.530s 00:08:21.623 02:27:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:21.623 02:27:55 -- common/autotest_common.sh@10 -- # set +x 00:08:21.623 ************************************ 00:08:21.623 END TEST nvmf_discovery 00:08:21.623 ************************************ 00:08:21.884 02:27:55 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:21.884 02:27:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:21.884 02:27:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.884 02:27:55 -- common/autotest_common.sh@10 -- # set +x 00:08:21.884 ************************************ 00:08:21.884 START TEST nvmf_referrals 00:08:21.884 ************************************ 00:08:21.884 02:27:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:21.884 * Looking for test storage... 00:08:21.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.884 02:27:55 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.884 02:27:55 -- nvmf/common.sh@7 -- # uname -s 00:08:22.146 02:27:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.146 02:27:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.146 02:27:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.146 02:27:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.146 02:27:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.146 02:27:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.146 02:27:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.146 02:27:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.146 02:27:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.146 02:27:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.146 02:27:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.146 02:27:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.146 02:27:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.146 02:27:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.146 02:27:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.146 02:27:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.146 02:27:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.146 02:27:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.146 02:27:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.146 02:27:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.146 02:27:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.146 02:27:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.146 02:27:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.146 02:27:55 -- paths/export.sh@5 -- # export PATH 00:08:22.146 02:27:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.146 02:27:55 -- nvmf/common.sh@47 -- # : 0 00:08:22.146 02:27:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.146 02:27:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.146 02:27:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.146 02:27:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.146 02:27:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.146 02:27:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.146 02:27:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.146 02:27:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.146 02:27:55 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:22.146 02:27:55 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:22.146 02:27:55 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:22.146 02:27:55 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:22.146 02:27:55 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:22.146 02:27:55 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:22.146 02:27:55 -- target/referrals.sh@37 -- # nvmftestinit 00:08:22.146 02:27:55 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:08:22.146 02:27:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.146 02:27:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:22.146 02:27:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:22.146 02:27:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:22.146 02:27:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.146 02:27:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.147 02:27:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.147 02:27:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:22.147 02:27:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:22.147 02:27:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.147 02:27:55 -- common/autotest_common.sh@10 -- # set +x 00:08:28.737 02:28:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:28.737 02:28:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:28.737 02:28:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:28.737 02:28:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:28.737 02:28:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:28.737 02:28:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:28.738 02:28:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:28.738 02:28:02 -- nvmf/common.sh@295 -- # net_devs=() 00:08:28.738 02:28:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:28.738 02:28:02 -- nvmf/common.sh@296 -- # e810=() 00:08:28.738 02:28:02 -- nvmf/common.sh@296 -- # local -ga e810 00:08:28.738 02:28:02 -- nvmf/common.sh@297 -- # x722=() 00:08:28.738 02:28:02 -- nvmf/common.sh@297 -- # local -ga x722 00:08:28.738 02:28:02 -- nvmf/common.sh@298 -- # mlx=() 00:08:28.738 02:28:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:28.738 02:28:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.738 02:28:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.738 02:28:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.738 02:28:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.738 02:28:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.738 02:28:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.738 02:28:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.738 02:28:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.738 02:28:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.738 02:28:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.738 02:28:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.738 02:28:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:28.738 02:28:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:28.738 02:28:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:28.738 02:28:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.738 02:28:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:28.738 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:28.738 02:28:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:28.738 02:28:02 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.738 02:28:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:28.738 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:28.738 02:28:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:28.738 02:28:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.738 02:28:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.738 02:28:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:28.738 02:28:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.738 02:28:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:28.738 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:28.738 02:28:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.738 02:28:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.738 02:28:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.738 02:28:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:28.738 02:28:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.738 02:28:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:28.738 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:28.738 02:28:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.738 02:28:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:28.738 02:28:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:28.738 02:28:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:28.738 02:28:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:28.738 02:28:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.738 02:28:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.738 02:28:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.738 02:28:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:28.738 02:28:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.738 02:28:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.738 02:28:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:28.738 02:28:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.738 02:28:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.738 02:28:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:28.738 02:28:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:28.738 02:28:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.738 02:28:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:28.738 02:28:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.738 02:28:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.738 02:28:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:28.738 02:28:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.738 02:28:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.738 02:28:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.999 02:28:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:28.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:08:28.999 00:08:28.999 --- 10.0.0.2 ping statistics --- 00:08:28.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.999 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:08:28.999 02:28:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.499 ms 00:08:28.999 00:08:28.999 --- 10.0.0.1 ping statistics --- 00:08:28.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.999 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:08:28.999 02:28:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.999 02:28:02 -- nvmf/common.sh@411 -- # return 0 00:08:28.999 02:28:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:28.999 02:28:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.999 02:28:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:28.999 02:28:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:28.999 02:28:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.999 02:28:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:28.999 02:28:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:28.999 02:28:02 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:28.999 02:28:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:28.999 02:28:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:28.999 02:28:02 -- common/autotest_common.sh@10 -- # set +x 00:08:28.999 02:28:02 -- nvmf/common.sh@470 -- # nvmfpid=4156318 00:08:28.999 02:28:02 -- nvmf/common.sh@471 -- # waitforlisten 4156318 00:08:29.000 02:28:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.000 02:28:02 -- common/autotest_common.sh@817 -- # '[' -z 4156318 ']' 00:08:29.000 02:28:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.000 02:28:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:29.000 02:28:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.000 02:28:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:29.000 02:28:02 -- common/autotest_common.sh@10 -- # set +x 00:08:29.000 [2024-04-27 02:28:02.472963] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:08:29.000 [2024-04-27 02:28:02.473030] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.000 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.000 [2024-04-27 02:28:02.544462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.000 [2024-04-27 02:28:02.616984] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.000 [2024-04-27 02:28:02.617022] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.000 [2024-04-27 02:28:02.617031] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.000 [2024-04-27 02:28:02.617039] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.000 [2024-04-27 02:28:02.617046] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.000 [2024-04-27 02:28:02.617160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.000 [2024-04-27 02:28:02.617292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.000 [2024-04-27 02:28:02.617385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.000 [2024-04-27 02:28:02.617389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.942 02:28:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:29.942 02:28:03 -- common/autotest_common.sh@850 -- # return 0 00:08:29.942 02:28:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:29.942 02:28:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:29.942 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:29.942 02:28:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.942 02:28:03 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:29.942 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.942 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:29.942 [2024-04-27 02:28:03.298880] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.942 02:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.942 02:28:03 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:29.942 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.942 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:29.942 [2024-04-27 02:28:03.315054] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:29.942 02:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.942 02:28:03 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:29.942 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.942 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:29.942 02:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.942 02:28:03 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:29.942 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.942 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:29.942 02:28:03 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:08:29.942 02:28:03 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:29.942 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.942 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:29.942 02:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.942 02:28:03 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.942 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.942 02:28:03 -- target/referrals.sh@48 -- # jq length 00:08:29.942 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:29.942 02:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.942 02:28:03 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:29.942 02:28:03 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:29.942 02:28:03 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:29.942 02:28:03 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.942 02:28:03 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:29.942 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.942 02:28:03 -- target/referrals.sh@21 -- # sort 00:08:29.942 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:29.942 02:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.942 02:28:03 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:29.942 02:28:03 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:29.942 02:28:03 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:29.942 02:28:03 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.942 02:28:03 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.942 02:28:03 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.942 02:28:03 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:29.942 02:28:03 -- target/referrals.sh@26 -- # sort 00:08:30.202 02:28:03 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:30.202 02:28:03 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:30.202 02:28:03 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:30.202 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.202 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:30.202 02:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.202 02:28:03 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:30.202 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.202 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:30.202 02:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.203 02:28:03 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:30.203 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.203 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:30.203 02:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.203 02:28:03 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:08:30.203 02:28:03 -- target/referrals.sh@56 -- # jq length 00:08:30.203 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.203 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:30.203 02:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.203 02:28:03 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:30.203 02:28:03 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:30.203 02:28:03 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:30.203 02:28:03 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:30.203 02:28:03 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.203 02:28:03 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:30.203 02:28:03 -- target/referrals.sh@26 -- # sort 00:08:30.203 02:28:03 -- target/referrals.sh@26 -- # echo 00:08:30.203 02:28:03 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:30.203 02:28:03 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:30.203 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.203 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:30.203 02:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.203 02:28:03 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:30.203 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.203 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:30.203 02:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.203 02:28:03 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:30.203 02:28:03 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:30.203 02:28:03 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:30.203 02:28:03 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:30.203 02:28:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.203 02:28:03 -- target/referrals.sh@21 -- # sort 00:08:30.203 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:08:30.203 02:28:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.463 02:28:03 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:30.463 02:28:03 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:30.463 02:28:03 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:30.463 02:28:03 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:30.463 02:28:03 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:30.463 02:28:03 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.463 02:28:03 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:30.463 02:28:03 -- target/referrals.sh@26 -- # sort 00:08:30.463 02:28:04 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:30.463 02:28:04 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:30.463 02:28:04 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:08:30.463 02:28:04 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:30.463 02:28:04 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:30.463 02:28:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.463 02:28:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:30.724 02:28:04 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:30.724 02:28:04 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:30.724 02:28:04 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:30.724 02:28:04 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:30.724 02:28:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.724 02:28:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:30.724 02:28:04 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:30.724 02:28:04 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:30.724 02:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.724 02:28:04 -- common/autotest_common.sh@10 -- # set +x 00:08:30.724 02:28:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.724 02:28:04 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:30.724 02:28:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:30.985 02:28:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:30.985 02:28:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:30.985 02:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.985 02:28:04 -- target/referrals.sh@21 -- # sort 00:08:30.985 02:28:04 -- common/autotest_common.sh@10 -- # set +x 00:08:30.985 02:28:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.985 02:28:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:30.985 02:28:04 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:30.985 02:28:04 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:30.985 02:28:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:30.985 02:28:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:30.985 02:28:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.985 02:28:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:30.985 02:28:04 -- target/referrals.sh@26 -- # sort 00:08:30.985 02:28:04 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:30.985 02:28:04 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:30.985 02:28:04 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:30.985 02:28:04 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:30.985 02:28:04 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:08:30.985 02:28:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.985 02:28:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:30.985 02:28:04 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:30.985 02:28:04 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:30.985 02:28:04 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:30.985 02:28:04 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:30.985 02:28:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.985 02:28:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:31.246 02:28:04 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:31.246 02:28:04 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:31.246 02:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.246 02:28:04 -- common/autotest_common.sh@10 -- # set +x 00:08:31.246 02:28:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.246 02:28:04 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:31.246 02:28:04 -- target/referrals.sh@82 -- # jq length 00:08:31.246 02:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.246 02:28:04 -- common/autotest_common.sh@10 -- # set +x 00:08:31.246 02:28:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.246 02:28:04 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:31.246 02:28:04 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:31.246 02:28:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:31.246 02:28:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:31.246 02:28:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.246 02:28:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:31.246 02:28:04 -- target/referrals.sh@26 -- # sort 00:08:31.246 02:28:04 -- target/referrals.sh@26 -- # echo 00:08:31.246 02:28:04 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:31.246 02:28:04 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:31.246 02:28:04 -- target/referrals.sh@86 -- # nvmftestfini 00:08:31.246 02:28:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:31.246 02:28:04 -- nvmf/common.sh@117 -- # sync 00:08:31.246 02:28:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:31.246 02:28:04 -- nvmf/common.sh@120 -- # set +e 00:08:31.246 02:28:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.246 02:28:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:31.246 rmmod nvme_tcp 00:08:31.246 rmmod nvme_fabrics 00:08:31.508 rmmod nvme_keyring 00:08:31.508 02:28:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:31.508 02:28:04 -- nvmf/common.sh@124 -- # set -e 
00:08:31.508 02:28:04 -- nvmf/common.sh@125 -- # return 0 00:08:31.508 02:28:04 -- nvmf/common.sh@478 -- # '[' -n 4156318 ']' 00:08:31.508 02:28:04 -- nvmf/common.sh@479 -- # killprocess 4156318 00:08:31.508 02:28:04 -- common/autotest_common.sh@936 -- # '[' -z 4156318 ']' 00:08:31.508 02:28:04 -- common/autotest_common.sh@940 -- # kill -0 4156318 00:08:31.508 02:28:04 -- common/autotest_common.sh@941 -- # uname 00:08:31.508 02:28:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:31.508 02:28:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4156318 00:08:31.508 02:28:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:31.508 02:28:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:31.508 02:28:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4156318' 00:08:31.508 killing process with pid 4156318 00:08:31.508 02:28:04 -- common/autotest_common.sh@955 -- # kill 4156318 00:08:31.508 02:28:04 -- common/autotest_common.sh@960 -- # wait 4156318 00:08:31.508 02:28:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:31.508 02:28:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:31.508 02:28:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:31.508 02:28:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.508 02:28:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.508 02:28:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.508 02:28:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.508 02:28:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.056 02:28:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:34.056 00:08:34.056 real 0m11.760s 00:08:34.056 user 0m12.716s 00:08:34.056 sys 0m5.651s 00:08:34.056 02:28:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:34.056 02:28:07 -- common/autotest_common.sh@10 -- # set +x 00:08:34.056 ************************************ 00:08:34.056 END TEST nvmf_referrals 00:08:34.056 ************************************ 00:08:34.056 02:28:07 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:34.056 02:28:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:34.056 02:28:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.056 02:28:07 -- common/autotest_common.sh@10 -- # set +x 00:08:34.056 ************************************ 00:08:34.056 START TEST nvmf_connect_disconnect 00:08:34.056 ************************************ 00:08:34.056 02:28:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:34.056 * Looking for test storage... 
00:08:34.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.056 02:28:07 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.056 02:28:07 -- nvmf/common.sh@7 -- # uname -s 00:08:34.056 02:28:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.056 02:28:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.056 02:28:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.056 02:28:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.056 02:28:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.056 02:28:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.056 02:28:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.056 02:28:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.056 02:28:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.056 02:28:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.056 02:28:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:34.056 02:28:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:34.056 02:28:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.056 02:28:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.056 02:28:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.056 02:28:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.056 02:28:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.056 02:28:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.056 02:28:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.056 02:28:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.056 02:28:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.056 02:28:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.056 02:28:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.056 02:28:07 -- paths/export.sh@5 -- # export PATH 00:08:34.056 02:28:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.056 02:28:07 -- nvmf/common.sh@47 -- # : 0 00:08:34.056 02:28:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.056 02:28:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:34.056 02:28:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.056 02:28:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.056 02:28:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.056 02:28:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:34.056 02:28:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:34.056 02:28:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.056 02:28:07 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.056 02:28:07 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.056 02:28:07 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:34.056 02:28:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:34.056 02:28:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.056 02:28:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:34.056 02:28:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:34.056 02:28:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:34.056 02:28:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.056 02:28:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.056 02:28:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.056 02:28:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:34.056 02:28:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:34.056 02:28:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:34.056 02:28:07 -- common/autotest_common.sh@10 -- # set +x 00:08:40.645 02:28:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:40.645 02:28:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.645 02:28:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:40.645 02:28:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:40.645 02:28:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.645 02:28:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.645 02:28:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.645 02:28:13 -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.645 02:28:13 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:40.645 02:28:13 -- nvmf/common.sh@296 -- # e810=() 00:08:40.645 02:28:13 -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.645 02:28:13 -- nvmf/common.sh@297 -- # x722=() 00:08:40.645 02:28:13 -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.645 02:28:13 -- nvmf/common.sh@298 -- # mlx=() 00:08:40.645 02:28:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.645 02:28:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.645 02:28:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.645 02:28:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.645 02:28:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.645 02:28:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.645 02:28:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.645 02:28:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.645 02:28:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.645 02:28:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.645 02:28:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.645 02:28:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.645 02:28:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.645 02:28:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:40.645 02:28:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:40.645 02:28:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:40.645 02:28:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:40.645 02:28:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.645 02:28:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.645 02:28:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:40.645 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:40.645 02:28:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.645 02:28:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.645 02:28:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.645 02:28:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.645 02:28:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.645 02:28:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.645 02:28:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:40.645 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:40.645 02:28:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.645 02:28:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.645 02:28:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.645 02:28:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.645 02:28:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.645 02:28:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.645 02:28:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:40.645 02:28:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:40.645 02:28:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.645 02:28:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.645 02:28:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:40.645 02:28:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.645 02:28:14 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:4b:00.0: cvl_0_0' 00:08:40.645 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:40.645 02:28:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.645 02:28:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.645 02:28:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.645 02:28:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:40.645 02:28:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.645 02:28:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:40.645 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:40.645 02:28:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.645 02:28:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:40.645 02:28:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:40.645 02:28:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:40.645 02:28:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:40.645 02:28:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:40.645 02:28:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.645 02:28:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.645 02:28:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.645 02:28:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:40.645 02:28:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.645 02:28:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.645 02:28:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:40.645 02:28:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.645 02:28:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.645 02:28:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:40.645 02:28:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:40.645 02:28:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.645 02:28:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.646 02:28:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.646 02:28:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.646 02:28:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:40.646 02:28:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.907 02:28:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.907 02:28:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.907 02:28:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:40.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:08:40.907 00:08:40.907 --- 10.0.0.2 ping statistics --- 00:08:40.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.907 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:08:40.907 02:28:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:40.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:08:40.907 00:08:40.907 --- 10.0.0.1 ping statistics --- 00:08:40.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.907 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:08:40.907 02:28:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.907 02:28:14 -- nvmf/common.sh@411 -- # return 0 00:08:40.907 02:28:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:40.907 02:28:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.907 02:28:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:40.907 02:28:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:40.907 02:28:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.907 02:28:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:40.907 02:28:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:40.907 02:28:14 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:40.907 02:28:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:40.907 02:28:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:40.907 02:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:40.907 02:28:14 -- nvmf/common.sh@470 -- # nvmfpid=4161092 00:08:40.907 02:28:14 -- nvmf/common.sh@471 -- # waitforlisten 4161092 00:08:40.907 02:28:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.907 02:28:14 -- common/autotest_common.sh@817 -- # '[' -z 4161092 ']' 00:08:40.907 02:28:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.907 02:28:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:40.907 02:28:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.907 02:28:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:40.907 02:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:40.907 [2024-04-27 02:28:14.420087] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:08:40.907 [2024-04-27 02:28:14.420136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.907 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.907 [2024-04-27 02:28:14.485628] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.182 [2024-04-27 02:28:14.548581] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.182 [2024-04-27 02:28:14.548620] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.182 [2024-04-27 02:28:14.548629] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.182 [2024-04-27 02:28:14.548636] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.182 [2024-04-27 02:28:14.548643] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
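The connect/disconnect test that follows builds a single malloc-backed subsystem and then attaches and detaches an initiator five times. A rough sketch of the target-side bring-up, reusing the RPCs, NQN and listener seen in this run; the connect/disconnect loop itself runs with xtrace turned off, so the two nvme-cli lines at the end are an assumed equivalent, not a copy of what the script executes.

```bash
# Target-side setup, per the trace below (sketch only; rpc_cmd is the test-suite helper).
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc_cmd bdev_malloc_create 64 512                     # creates Malloc0 (64 MiB, 512-byte blocks)
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Assumed shape of one of the five iterations (the actual loop body is hidden behind 'set +x'):
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # logs "... disconnected 1 controller(s)"
```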
00:08:41.182 [2024-04-27 02:28:14.548766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.182 [2024-04-27 02:28:14.548883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.182 [2024-04-27 02:28:14.549007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.182 [2024-04-27 02:28:14.549010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.759 02:28:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:41.759 02:28:15 -- common/autotest_common.sh@850 -- # return 0 00:08:41.759 02:28:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:41.759 02:28:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:41.759 02:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:41.759 02:28:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.759 02:28:15 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:41.759 02:28:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.759 02:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:41.759 [2024-04-27 02:28:15.234918] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.759 02:28:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.759 02:28:15 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:41.759 02:28:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.759 02:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:41.759 02:28:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.759 02:28:15 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:41.759 02:28:15 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:41.759 02:28:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.759 02:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:41.759 02:28:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.759 02:28:15 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:41.759 02:28:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.759 02:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:41.759 02:28:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.759 02:28:15 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.759 02:28:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.759 02:28:15 -- common/autotest_common.sh@10 -- # set +x 00:08:41.759 [2024-04-27 02:28:15.294344] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.759 02:28:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.759 02:28:15 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:41.759 02:28:15 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:41.759 02:28:15 -- target/connect_disconnect.sh@34 -- # set +x 00:08:45.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.093 02:28:33 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:00.093 02:28:33 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:00.093 02:28:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:00.093 02:28:33 -- nvmf/common.sh@117 -- # sync 00:09:00.093 02:28:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:00.093 02:28:33 -- nvmf/common.sh@120 -- # set +e 00:09:00.093 02:28:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:00.093 02:28:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:00.093 rmmod nvme_tcp 00:09:00.093 rmmod nvme_fabrics 00:09:00.093 rmmod nvme_keyring 00:09:00.094 02:28:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:00.094 02:28:33 -- nvmf/common.sh@124 -- # set -e 00:09:00.094 02:28:33 -- nvmf/common.sh@125 -- # return 0 00:09:00.094 02:28:33 -- nvmf/common.sh@478 -- # '[' -n 4161092 ']' 00:09:00.094 02:28:33 -- nvmf/common.sh@479 -- # killprocess 4161092 00:09:00.094 02:28:33 -- common/autotest_common.sh@936 -- # '[' -z 4161092 ']' 00:09:00.094 02:28:33 -- common/autotest_common.sh@940 -- # kill -0 4161092 00:09:00.094 02:28:33 -- common/autotest_common.sh@941 -- # uname 00:09:00.094 02:28:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:00.094 02:28:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4161092 00:09:00.094 02:28:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:00.094 02:28:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:00.094 02:28:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4161092' 00:09:00.094 killing process with pid 4161092 00:09:00.094 02:28:33 -- common/autotest_common.sh@955 -- # kill 4161092 00:09:00.094 02:28:33 -- common/autotest_common.sh@960 -- # wait 4161092 00:09:00.094 02:28:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:00.094 02:28:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:00.094 02:28:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:00.094 02:28:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.094 02:28:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:00.094 02:28:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.094 02:28:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.094 02:28:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.642 02:28:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:02.642 00:09:02.642 real 0m28.429s 00:09:02.642 user 1m18.122s 00:09:02.642 sys 0m6.252s 00:09:02.642 02:28:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:02.642 02:28:35 -- common/autotest_common.sh@10 -- # set +x 00:09:02.642 ************************************ 00:09:02.642 END TEST nvmf_connect_disconnect 00:09:02.642 ************************************ 00:09:02.642 02:28:35 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:02.642 02:28:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:02.642 02:28:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.642 02:28:35 -- common/autotest_common.sh@10 -- # set +x 00:09:02.642 ************************************ 00:09:02.642 START TEST nvmf_multitarget 00:09:02.642 ************************************ 00:09:02.642 02:28:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:09:02.642 * Looking for test storage... 00:09:02.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.642 02:28:36 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.642 02:28:36 -- nvmf/common.sh@7 -- # uname -s 00:09:02.642 02:28:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.642 02:28:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.642 02:28:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.642 02:28:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.642 02:28:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.642 02:28:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.642 02:28:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.642 02:28:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.642 02:28:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.642 02:28:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.642 02:28:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.642 02:28:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.642 02:28:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.642 02:28:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.642 02:28:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.642 02:28:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.642 02:28:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.642 02:28:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.642 02:28:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.642 02:28:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.642 02:28:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.642 02:28:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.642 02:28:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.642 02:28:36 -- paths/export.sh@5 -- # export PATH 00:09:02.642 02:28:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.642 02:28:36 -- nvmf/common.sh@47 -- # : 0 00:09:02.642 02:28:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.642 02:28:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.642 02:28:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.642 02:28:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.642 02:28:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.642 02:28:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.642 02:28:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.642 02:28:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.642 02:28:36 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:02.642 02:28:36 -- target/multitarget.sh@15 -- # nvmftestinit 00:09:02.642 02:28:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:02.642 02:28:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.642 02:28:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:02.642 02:28:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:02.642 02:28:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:02.642 02:28:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.642 02:28:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.642 02:28:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.642 02:28:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:02.642 02:28:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:02.642 02:28:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:02.642 02:28:36 -- common/autotest_common.sh@10 -- # set +x 00:09:10.793 02:28:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:10.793 02:28:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:10.793 02:28:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:10.793 02:28:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:10.793 02:28:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:10.793 02:28:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:10.793 02:28:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:10.793 02:28:42 -- nvmf/common.sh@295 -- # net_devs=() 00:09:10.793 02:28:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:10.793 02:28:42 -- 
nvmf/common.sh@296 -- # e810=() 00:09:10.793 02:28:42 -- nvmf/common.sh@296 -- # local -ga e810 00:09:10.793 02:28:42 -- nvmf/common.sh@297 -- # x722=() 00:09:10.793 02:28:42 -- nvmf/common.sh@297 -- # local -ga x722 00:09:10.793 02:28:42 -- nvmf/common.sh@298 -- # mlx=() 00:09:10.793 02:28:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:10.793 02:28:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.793 02:28:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.793 02:28:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.793 02:28:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.793 02:28:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.793 02:28:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.793 02:28:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.793 02:28:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.793 02:28:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.793 02:28:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.793 02:28:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.793 02:28:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:10.793 02:28:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:10.793 02:28:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:10.793 02:28:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.793 02:28:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:10.793 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:10.793 02:28:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.793 02:28:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:10.793 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:10.793 02:28:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:10.793 02:28:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.793 02:28:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.793 02:28:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:10.793 02:28:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.793 02:28:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:09:10.793 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:10.793 02:28:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.793 02:28:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.793 02:28:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.793 02:28:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:10.793 02:28:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.793 02:28:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:10.793 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:10.793 02:28:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.793 02:28:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:10.793 02:28:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:10.793 02:28:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:10.793 02:28:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:10.793 02:28:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.793 02:28:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.794 02:28:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.794 02:28:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:10.794 02:28:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.794 02:28:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.794 02:28:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:10.794 02:28:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.794 02:28:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.794 02:28:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:10.794 02:28:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:10.794 02:28:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.794 02:28:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.794 02:28:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.794 02:28:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.794 02:28:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:10.794 02:28:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.794 02:28:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.794 02:28:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.794 02:28:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:10.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:09:10.794 00:09:10.794 --- 10.0.0.2 ping statistics --- 00:09:10.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.794 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:09:10.794 02:28:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:10.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:09:10.794 00:09:10.794 --- 10.0.0.1 ping statistics --- 00:09:10.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.794 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:09:10.794 02:28:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.794 02:28:43 -- nvmf/common.sh@411 -- # return 0 00:09:10.794 02:28:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:10.794 02:28:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.794 02:28:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:10.794 02:28:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:10.794 02:28:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.794 02:28:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:10.794 02:28:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:10.794 02:28:43 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:10.794 02:28:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:10.794 02:28:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:10.794 02:28:43 -- common/autotest_common.sh@10 -- # set +x 00:09:10.794 02:28:43 -- nvmf/common.sh@470 -- # nvmfpid=4168966 00:09:10.794 02:28:43 -- nvmf/common.sh@471 -- # waitforlisten 4168966 00:09:10.794 02:28:43 -- common/autotest_common.sh@817 -- # '[' -z 4168966 ']' 00:09:10.794 02:28:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.794 02:28:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:10.794 02:28:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.794 02:28:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:10.794 02:28:43 -- common/autotest_common.sh@10 -- # set +x 00:09:10.794 02:28:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:10.794 [2024-04-27 02:28:43.305485] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:09:10.794 [2024-04-27 02:28:43.305552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.794 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.794 [2024-04-27 02:28:43.377582] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.794 [2024-04-27 02:28:43.450678] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.794 [2024-04-27 02:28:43.450718] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.794 [2024-04-27 02:28:43.450727] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.794 [2024-04-27 02:28:43.450735] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.794 [2024-04-27 02:28:43.450741] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
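The multitarget test that follows only checks that additional nvmf targets can be created and torn down through the JSON-RPC helper script. A compact sketch of that round-trip, with the helper path, target names and sizes taken from this trace (the jq pipelines stand in for the script's command substitutions):

```bash
# Multitarget create/delete round-trip, per the trace below (sketch only).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

$rpc_py nvmf_get_targets | jq length            # 1 -> only the default target
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
$rpc_py nvmf_get_targets | jq length            # 3 -> default plus the two new targets
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
$rpc_py nvmf_get_targets | jq length            # back to 1
```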
00:09:10.794 [2024-04-27 02:28:43.450860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.794 [2024-04-27 02:28:43.450954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.794 [2024-04-27 02:28:43.451082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.794 [2024-04-27 02:28:43.451086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.794 02:28:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:10.794 02:28:44 -- common/autotest_common.sh@850 -- # return 0 00:09:10.794 02:28:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:10.794 02:28:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:10.794 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:09:10.794 02:28:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.794 02:28:44 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:10.794 02:28:44 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:10.794 02:28:44 -- target/multitarget.sh@21 -- # jq length 00:09:10.794 02:28:44 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:10.794 02:28:44 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:10.794 "nvmf_tgt_1" 00:09:10.794 02:28:44 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:10.794 "nvmf_tgt_2" 00:09:11.055 02:28:44 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:11.055 02:28:44 -- target/multitarget.sh@28 -- # jq length 00:09:11.055 02:28:44 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:11.055 02:28:44 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:11.055 true 00:09:11.055 02:28:44 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:11.316 true 00:09:11.316 02:28:44 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:11.316 02:28:44 -- target/multitarget.sh@35 -- # jq length 00:09:11.316 02:28:44 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:11.316 02:28:44 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:11.316 02:28:44 -- target/multitarget.sh@41 -- # nvmftestfini 00:09:11.316 02:28:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:11.316 02:28:44 -- nvmf/common.sh@117 -- # sync 00:09:11.316 02:28:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.316 02:28:44 -- nvmf/common.sh@120 -- # set +e 00:09:11.316 02:28:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.316 02:28:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.316 rmmod nvme_tcp 00:09:11.316 rmmod nvme_fabrics 00:09:11.316 rmmod nvme_keyring 00:09:11.316 02:28:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.316 02:28:44 -- nvmf/common.sh@124 -- # set -e 00:09:11.316 02:28:44 -- nvmf/common.sh@125 -- # return 0 
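The multitarget steps traced above reduce to three RPCs issued through the helper script; a minimal sketch of the same sequence, assuming a running nvmf_tgt and the multitarget_rpc.py helper from the SPDK tree (full workspace paths from the log abbreviated to repo-relative ones), would be:

  # create two additional targets next to the default one (flags as used in the log)
  ./test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
  ./test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
  # default target plus the two new ones -> the test expects 3 entries here
  ./test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length
  # delete them again and confirm only the default target remains
  ./test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
  ./test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
  ./test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length

The RPC socket is a UNIX-domain socket, so the helper can be run from the default namespace even though the target application itself lives inside cvl_0_0_ns_spdk. The teardown that follows in the log (rmmod of the nvme modules, killing the nvmf_tgt pid, removing the namespace) is the generic nvmftestfini path shared by all the nvmf tests.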
00:09:11.316 02:28:44 -- nvmf/common.sh@478 -- # '[' -n 4168966 ']' 00:09:11.316 02:28:44 -- nvmf/common.sh@479 -- # killprocess 4168966 00:09:11.316 02:28:44 -- common/autotest_common.sh@936 -- # '[' -z 4168966 ']' 00:09:11.316 02:28:44 -- common/autotest_common.sh@940 -- # kill -0 4168966 00:09:11.316 02:28:44 -- common/autotest_common.sh@941 -- # uname 00:09:11.316 02:28:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:11.316 02:28:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4168966 00:09:11.316 02:28:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:11.316 02:28:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:11.316 02:28:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4168966' 00:09:11.316 killing process with pid 4168966 00:09:11.316 02:28:44 -- common/autotest_common.sh@955 -- # kill 4168966 00:09:11.316 02:28:44 -- common/autotest_common.sh@960 -- # wait 4168966 00:09:11.577 02:28:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:11.577 02:28:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:11.577 02:28:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:11.577 02:28:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.577 02:28:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.577 02:28:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.577 02:28:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.577 02:28:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.120 02:28:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:14.120 00:09:14.120 real 0m11.184s 00:09:14.120 user 0m9.140s 00:09:14.120 sys 0m5.816s 00:09:14.120 02:28:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:14.120 02:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:14.120 ************************************ 00:09:14.120 END TEST nvmf_multitarget 00:09:14.120 ************************************ 00:09:14.120 02:28:47 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:14.120 02:28:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:14.120 02:28:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.120 02:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:14.120 ************************************ 00:09:14.120 START TEST nvmf_rpc 00:09:14.120 ************************************ 00:09:14.120 02:28:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:14.120 * Looking for test storage... 
00:09:14.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.120 02:28:47 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.120 02:28:47 -- nvmf/common.sh@7 -- # uname -s 00:09:14.120 02:28:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.120 02:28:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.120 02:28:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.120 02:28:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.120 02:28:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.120 02:28:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.120 02:28:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.120 02:28:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.120 02:28:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.121 02:28:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.121 02:28:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:14.121 02:28:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:14.121 02:28:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.121 02:28:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.121 02:28:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.121 02:28:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.121 02:28:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.121 02:28:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.121 02:28:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.121 02:28:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.121 02:28:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.121 02:28:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.121 02:28:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.121 02:28:47 -- paths/export.sh@5 -- # export PATH 00:09:14.121 02:28:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.121 02:28:47 -- nvmf/common.sh@47 -- # : 0 00:09:14.121 02:28:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.121 02:28:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.121 02:28:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.121 02:28:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.121 02:28:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.121 02:28:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.121 02:28:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.121 02:28:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.121 02:28:47 -- target/rpc.sh@11 -- # loops=5 00:09:14.121 02:28:47 -- target/rpc.sh@23 -- # nvmftestinit 00:09:14.121 02:28:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:14.121 02:28:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.121 02:28:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:14.121 02:28:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:14.121 02:28:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:14.121 02:28:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.121 02:28:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.121 02:28:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.121 02:28:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:14.121 02:28:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:14.121 02:28:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:14.121 02:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:20.717 02:28:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:20.717 02:28:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.717 02:28:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.717 02:28:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.717 02:28:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.717 02:28:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.717 02:28:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.717 02:28:53 -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.717 02:28:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.717 02:28:53 -- nvmf/common.sh@296 -- # e810=() 00:09:20.717 02:28:53 -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.717 
02:28:53 -- nvmf/common.sh@297 -- # x722=() 00:09:20.717 02:28:53 -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.717 02:28:53 -- nvmf/common.sh@298 -- # mlx=() 00:09:20.717 02:28:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.717 02:28:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.717 02:28:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.717 02:28:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.717 02:28:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.717 02:28:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.717 02:28:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.717 02:28:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.717 02:28:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.717 02:28:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.717 02:28:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.717 02:28:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.717 02:28:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.717 02:28:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:20.717 02:28:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.717 02:28:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.717 02:28:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:20.717 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:20.717 02:28:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.717 02:28:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:20.717 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:20.717 02:28:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.717 02:28:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:20.717 02:28:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.717 02:28:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.717 02:28:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:20.717 02:28:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.717 02:28:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:20.717 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:20.717 02:28:53 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:20.717 02:28:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.717 02:28:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.717 02:28:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:20.717 02:28:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.717 02:28:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:20.717 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:20.717 02:28:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.717 02:28:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:20.718 02:28:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:20.718 02:28:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:20.718 02:28:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:20.718 02:28:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:20.718 02:28:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.718 02:28:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.718 02:28:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.718 02:28:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:20.718 02:28:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.718 02:28:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.718 02:28:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:20.718 02:28:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.718 02:28:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.718 02:28:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:20.718 02:28:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:20.718 02:28:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.718 02:28:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.718 02:28:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.718 02:28:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.718 02:28:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.718 02:28:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.718 02:28:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.718 02:28:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.718 02:28:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:09:20.718 00:09:20.718 --- 10.0.0.2 ping statistics --- 00:09:20.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.718 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:09:20.718 02:28:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:09:20.718 00:09:20.718 --- 10.0.0.1 ping statistics --- 00:09:20.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.718 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:09:20.718 02:28:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.718 02:28:54 -- nvmf/common.sh@411 -- # return 0 00:09:20.718 02:28:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:20.718 02:28:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.718 02:28:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:20.718 02:28:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:20.718 02:28:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.718 02:28:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:20.718 02:28:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:20.718 02:28:54 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:20.718 02:28:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:20.718 02:28:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:20.718 02:28:54 -- common/autotest_common.sh@10 -- # set +x 00:09:20.718 02:28:54 -- nvmf/common.sh@470 -- # nvmfpid=4173605 00:09:20.718 02:28:54 -- nvmf/common.sh@471 -- # waitforlisten 4173605 00:09:20.718 02:28:54 -- common/autotest_common.sh@817 -- # '[' -z 4173605 ']' 00:09:20.718 02:28:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.718 02:28:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:20.718 02:28:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.718 02:28:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:20.718 02:28:54 -- common/autotest_common.sh@10 -- # set +x 00:09:20.718 02:28:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:20.718 [2024-04-27 02:28:54.289635] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:09:20.718 [2024-04-27 02:28:54.289701] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.718 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.980 [2024-04-27 02:28:54.361390] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.980 [2024-04-27 02:28:54.435335] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.980 [2024-04-27 02:28:54.435376] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.980 [2024-04-27 02:28:54.435389] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.980 [2024-04-27 02:28:54.435397] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.980 [2024-04-27 02:28:54.435402] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
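The nvmf_tcp_init trace above (run once per test, here for the second time) is the whole test-bed: one port of the E810 NIC is moved into a private network namespace and acts as the NVMe/TCP target, while the sibling port stays in the default namespace as the initiator. A condensed sketch of those steps, using the interface names and addresses from the log (cvl_0_0 / cvl_0_1, 10.0.0.1 / 10.0.0.2) and launching nvmf_tgt directly rather than via the nvmfappstart/waitforlisten helpers, would be:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port goes into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator side
  ping -c 1 10.0.0.2                                             # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator sanity check
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Because the target runs inside the namespace, every subsequent RPC and the target application itself are prefixed with ip netns exec cvl_0_0_ns_spdk in the trace, while the nvme-cli commands run unprefixed from the default namespace.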
00:09:20.980 [2024-04-27 02:28:54.435517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.980 [2024-04-27 02:28:54.435639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.980 [2024-04-27 02:28:54.435766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.980 [2024-04-27 02:28:54.435768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.552 02:28:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:21.552 02:28:55 -- common/autotest_common.sh@850 -- # return 0 00:09:21.552 02:28:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:21.552 02:28:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:21.552 02:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.552 02:28:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.552 02:28:55 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:21.552 02:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.552 02:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.552 02:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.552 02:28:55 -- target/rpc.sh@26 -- # stats='{ 00:09:21.552 "tick_rate": 2400000000, 00:09:21.552 "poll_groups": [ 00:09:21.552 { 00:09:21.552 "name": "nvmf_tgt_poll_group_0", 00:09:21.552 "admin_qpairs": 0, 00:09:21.552 "io_qpairs": 0, 00:09:21.552 "current_admin_qpairs": 0, 00:09:21.552 "current_io_qpairs": 0, 00:09:21.552 "pending_bdev_io": 0, 00:09:21.552 "completed_nvme_io": 0, 00:09:21.552 "transports": [] 00:09:21.552 }, 00:09:21.552 { 00:09:21.552 "name": "nvmf_tgt_poll_group_1", 00:09:21.552 "admin_qpairs": 0, 00:09:21.552 "io_qpairs": 0, 00:09:21.552 "current_admin_qpairs": 0, 00:09:21.552 "current_io_qpairs": 0, 00:09:21.552 "pending_bdev_io": 0, 00:09:21.552 "completed_nvme_io": 0, 00:09:21.552 "transports": [] 00:09:21.552 }, 00:09:21.552 { 00:09:21.552 "name": "nvmf_tgt_poll_group_2", 00:09:21.552 "admin_qpairs": 0, 00:09:21.552 "io_qpairs": 0, 00:09:21.552 "current_admin_qpairs": 0, 00:09:21.552 "current_io_qpairs": 0, 00:09:21.552 "pending_bdev_io": 0, 00:09:21.552 "completed_nvme_io": 0, 00:09:21.552 "transports": [] 00:09:21.552 }, 00:09:21.552 { 00:09:21.552 "name": "nvmf_tgt_poll_group_3", 00:09:21.552 "admin_qpairs": 0, 00:09:21.552 "io_qpairs": 0, 00:09:21.552 "current_admin_qpairs": 0, 00:09:21.552 "current_io_qpairs": 0, 00:09:21.552 "pending_bdev_io": 0, 00:09:21.552 "completed_nvme_io": 0, 00:09:21.552 "transports": [] 00:09:21.552 } 00:09:21.552 ] 00:09:21.552 }' 00:09:21.552 02:28:55 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:21.552 02:28:55 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:21.552 02:28:55 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:21.552 02:28:55 -- target/rpc.sh@15 -- # wc -l 00:09:21.552 02:28:55 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:21.552 02:28:55 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:21.814 02:28:55 -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:21.814 02:28:55 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.814 02:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.814 02:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.814 [2024-04-27 02:28:55.213270] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.814 02:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.814 02:28:55 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:21.814 02:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.814 02:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.814 02:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.814 02:28:55 -- target/rpc.sh@33 -- # stats='{ 00:09:21.814 "tick_rate": 2400000000, 00:09:21.814 "poll_groups": [ 00:09:21.814 { 00:09:21.814 "name": "nvmf_tgt_poll_group_0", 00:09:21.814 "admin_qpairs": 0, 00:09:21.814 "io_qpairs": 0, 00:09:21.814 "current_admin_qpairs": 0, 00:09:21.814 "current_io_qpairs": 0, 00:09:21.814 "pending_bdev_io": 0, 00:09:21.814 "completed_nvme_io": 0, 00:09:21.814 "transports": [ 00:09:21.814 { 00:09:21.814 "trtype": "TCP" 00:09:21.814 } 00:09:21.814 ] 00:09:21.814 }, 00:09:21.814 { 00:09:21.814 "name": "nvmf_tgt_poll_group_1", 00:09:21.814 "admin_qpairs": 0, 00:09:21.814 "io_qpairs": 0, 00:09:21.814 "current_admin_qpairs": 0, 00:09:21.814 "current_io_qpairs": 0, 00:09:21.814 "pending_bdev_io": 0, 00:09:21.814 "completed_nvme_io": 0, 00:09:21.814 "transports": [ 00:09:21.814 { 00:09:21.814 "trtype": "TCP" 00:09:21.814 } 00:09:21.814 ] 00:09:21.814 }, 00:09:21.814 { 00:09:21.814 "name": "nvmf_tgt_poll_group_2", 00:09:21.814 "admin_qpairs": 0, 00:09:21.814 "io_qpairs": 0, 00:09:21.814 "current_admin_qpairs": 0, 00:09:21.814 "current_io_qpairs": 0, 00:09:21.814 "pending_bdev_io": 0, 00:09:21.814 "completed_nvme_io": 0, 00:09:21.814 "transports": [ 00:09:21.814 { 00:09:21.814 "trtype": "TCP" 00:09:21.814 } 00:09:21.814 ] 00:09:21.814 }, 00:09:21.814 { 00:09:21.814 "name": "nvmf_tgt_poll_group_3", 00:09:21.814 "admin_qpairs": 0, 00:09:21.814 "io_qpairs": 0, 00:09:21.814 "current_admin_qpairs": 0, 00:09:21.814 "current_io_qpairs": 0, 00:09:21.814 "pending_bdev_io": 0, 00:09:21.814 "completed_nvme_io": 0, 00:09:21.814 "transports": [ 00:09:21.814 { 00:09:21.814 "trtype": "TCP" 00:09:21.814 } 00:09:21.814 ] 00:09:21.814 } 00:09:21.814 ] 00:09:21.814 }' 00:09:21.814 02:28:55 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:21.814 02:28:55 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:21.814 02:28:55 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:21.814 02:28:55 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:21.814 02:28:55 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:21.814 02:28:55 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:21.814 02:28:55 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:21.814 02:28:55 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:21.814 02:28:55 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:21.814 02:28:55 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:21.814 02:28:55 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:21.814 02:28:55 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:21.814 02:28:55 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:21.814 02:28:55 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:21.814 02:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.814 02:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.814 Malloc1 00:09:21.814 02:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.814 02:28:55 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:21.814 02:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.814 02:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.814 
02:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.814 02:28:55 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:21.814 02:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.814 02:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.814 02:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.814 02:28:55 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:21.814 02:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.814 02:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.814 02:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.814 02:28:55 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.814 02:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.814 02:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.814 [2024-04-27 02:28:55.401063] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.814 02:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.814 02:28:55 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:21.814 02:28:55 -- common/autotest_common.sh@638 -- # local es=0 00:09:21.814 02:28:55 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:21.814 02:28:55 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:21.814 02:28:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:21.814 02:28:55 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:21.814 02:28:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:21.814 02:28:55 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:21.814 02:28:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:21.814 02:28:55 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:21.814 02:28:55 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:21.814 02:28:55 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:21.814 [2024-04-27 02:28:55.428097] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:21.814 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:21.814 could not add new controller: failed to write to nvme-fabrics device 00:09:21.814 02:28:55 -- common/autotest_common.sh@641 -- # es=1 00:09:21.814 02:28:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:21.814 02:28:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:21.814 02:28:55 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:09:21.814 02:28:55 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:21.814 02:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.814 02:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:22.076 02:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.076 02:28:55 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.490 02:28:56 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.490 02:28:56 -- common/autotest_common.sh@1184 -- # local i=0 00:09:23.490 02:28:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.490 02:28:56 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:23.490 02:28:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:25.406 02:28:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:25.406 02:28:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:25.406 02:28:58 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.406 02:28:59 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:25.406 02:28:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.406 02:28:59 -- common/autotest_common.sh@1194 -- # return 0 00:09:25.406 02:28:59 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.704 02:28:59 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.704 02:28:59 -- common/autotest_common.sh@1205 -- # local i=0 00:09:25.704 02:28:59 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:25.704 02:28:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.704 02:28:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:25.704 02:28:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.704 02:28:59 -- common/autotest_common.sh@1217 -- # return 0 00:09:25.704 02:28:59 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.704 02:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.704 02:28:59 -- common/autotest_common.sh@10 -- # set +x 00:09:25.704 02:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.704 02:28:59 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.704 02:28:59 -- common/autotest_common.sh@638 -- # local es=0 00:09:25.704 02:28:59 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.704 02:28:59 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:25.704 02:28:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:25.704 02:28:59 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:25.704 02:28:59 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:25.704 02:28:59 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:25.704 02:28:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:25.704 02:28:59 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:25.704 02:28:59 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:25.704 02:28:59 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.704 [2024-04-27 02:28:59.145164] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:25.704 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:25.704 could not add new controller: failed to write to nvme-fabrics device 00:09:25.704 02:28:59 -- common/autotest_common.sh@641 -- # es=1 00:09:25.704 02:28:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:25.704 02:28:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:25.704 02:28:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:25.704 02:28:59 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:25.704 02:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.704 02:28:59 -- common/autotest_common.sh@10 -- # set +x 00:09:25.704 02:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.704 02:28:59 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:27.111 02:29:00 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:27.111 02:29:00 -- common/autotest_common.sh@1184 -- # local i=0 00:09:27.111 02:29:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.111 02:29:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:27.111 02:29:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:29.026 02:29:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:29.026 02:29:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:29.026 02:29:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.286 02:29:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:29.286 02:29:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.286 02:29:02 -- common/autotest_common.sh@1194 -- # return 0 00:09:29.286 02:29:02 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.286 02:29:02 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.286 02:29:02 -- common/autotest_common.sh@1205 -- # local i=0 00:09:29.286 02:29:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:29.286 02:29:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.286 02:29:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:29.286 02:29:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.286 02:29:02 -- common/autotest_common.sh@1217 -- # return 0 00:09:29.286 02:29:02 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.286 02:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.286 02:29:02 -- common/autotest_common.sh@10 -- # set +x 00:09:29.286 02:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.286 02:29:02 -- target/rpc.sh@81 -- # seq 1 5 00:09:29.286 02:29:02 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:29.286 02:29:02 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:29.286 02:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.286 02:29:02 -- common/autotest_common.sh@10 -- # set +x 00:09:29.547 02:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.547 02:29:02 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.547 02:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.547 02:29:02 -- common/autotest_common.sh@10 -- # set +x 00:09:29.547 [2024-04-27 02:29:02.919710] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.547 02:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.547 02:29:02 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:29.547 02:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.547 02:29:02 -- common/autotest_common.sh@10 -- # set +x 00:09:29.547 02:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.547 02:29:02 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:29.547 02:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.547 02:29:02 -- common/autotest_common.sh@10 -- # set +x 00:09:29.547 02:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.547 02:29:02 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:30.929 02:29:04 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:30.929 02:29:04 -- common/autotest_common.sh@1184 -- # local i=0 00:09:30.929 02:29:04 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.929 02:29:04 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:30.929 02:29:04 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:33.474 02:29:06 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:33.474 02:29:06 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:33.474 02:29:06 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.474 02:29:06 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:33.474 02:29:06 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.474 02:29:06 -- common/autotest_common.sh@1194 -- # return 0 00:09:33.474 02:29:06 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.474 02:29:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.474 02:29:06 -- common/autotest_common.sh@1205 -- # local i=0 00:09:33.474 02:29:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:33.474 02:29:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
00:09:33.474 02:29:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:33.474 02:29:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.474 02:29:06 -- common/autotest_common.sh@1217 -- # return 0 00:09:33.474 02:29:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:33.474 02:29:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.474 02:29:06 -- common/autotest_common.sh@10 -- # set +x 00:09:33.474 02:29:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.474 02:29:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.474 02:29:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.474 02:29:06 -- common/autotest_common.sh@10 -- # set +x 00:09:33.474 02:29:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.474 02:29:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:33.474 02:29:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:33.474 02:29:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.474 02:29:06 -- common/autotest_common.sh@10 -- # set +x 00:09:33.474 02:29:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.474 02:29:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.474 02:29:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.474 02:29:06 -- common/autotest_common.sh@10 -- # set +x 00:09:33.474 [2024-04-27 02:29:06.644059] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.474 02:29:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.474 02:29:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:33.474 02:29:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.474 02:29:06 -- common/autotest_common.sh@10 -- # set +x 00:09:33.474 02:29:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.474 02:29:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:33.474 02:29:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.474 02:29:06 -- common/autotest_common.sh@10 -- # set +x 00:09:33.474 02:29:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.475 02:29:06 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:34.859 02:29:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:34.859 02:29:08 -- common/autotest_common.sh@1184 -- # local i=0 00:09:34.859 02:29:08 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.859 02:29:08 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:34.859 02:29:08 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:36.774 02:29:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:36.774 02:29:10 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:36.774 02:29:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:36.774 02:29:10 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:36.774 02:29:10 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:36.774 02:29:10 -- 
common/autotest_common.sh@1194 -- # return 0 00:09:36.774 02:29:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.774 02:29:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:36.774 02:29:10 -- common/autotest_common.sh@1205 -- # local i=0 00:09:36.774 02:29:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:36.774 02:29:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.774 02:29:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:36.774 02:29:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.774 02:29:10 -- common/autotest_common.sh@1217 -- # return 0 00:09:36.774 02:29:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.774 02:29:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.774 02:29:10 -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 02:29:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.774 02:29:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.774 02:29:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.774 02:29:10 -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 02:29:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.774 02:29:10 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:36.774 02:29:10 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.774 02:29:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.774 02:29:10 -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 02:29:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.774 02:29:10 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.774 02:29:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.774 02:29:10 -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 [2024-04-27 02:29:10.321988] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.774 02:29:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.774 02:29:10 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:36.774 02:29:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.774 02:29:10 -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 02:29:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.774 02:29:10 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.774 02:29:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.774 02:29:10 -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 02:29:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.774 02:29:10 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:38.691 02:29:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:38.691 02:29:11 -- common/autotest_common.sh@1184 -- # local i=0 00:09:38.691 02:29:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.691 02:29:11 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:09:38.691 02:29:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:40.606 02:29:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:40.606 02:29:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:40.606 02:29:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.606 02:29:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:40.606 02:29:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.606 02:29:13 -- common/autotest_common.sh@1194 -- # return 0 00:09:40.606 02:29:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.606 02:29:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.606 02:29:13 -- common/autotest_common.sh@1205 -- # local i=0 00:09:40.606 02:29:13 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:40.606 02:29:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.606 02:29:13 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:40.606 02:29:13 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.606 02:29:14 -- common/autotest_common.sh@1217 -- # return 0 00:09:40.606 02:29:14 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:40.606 02:29:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.606 02:29:14 -- common/autotest_common.sh@10 -- # set +x 00:09:40.606 02:29:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.606 02:29:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.606 02:29:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.606 02:29:14 -- common/autotest_common.sh@10 -- # set +x 00:09:40.606 02:29:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.606 02:29:14 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:40.606 02:29:14 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:40.606 02:29:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.606 02:29:14 -- common/autotest_common.sh@10 -- # set +x 00:09:40.606 02:29:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.606 02:29:14 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.606 02:29:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.606 02:29:14 -- common/autotest_common.sh@10 -- # set +x 00:09:40.606 [2024-04-27 02:29:14.053557] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.606 02:29:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.606 02:29:14 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:40.606 02:29:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.606 02:29:14 -- common/autotest_common.sh@10 -- # set +x 00:09:40.606 02:29:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.606 02:29:14 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:40.606 02:29:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.606 02:29:14 -- common/autotest_common.sh@10 -- # set +x 00:09:40.606 02:29:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.606 
02:29:14 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:42.522 02:29:15 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:42.522 02:29:15 -- common/autotest_common.sh@1184 -- # local i=0 00:09:42.522 02:29:15 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.522 02:29:15 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:42.522 02:29:15 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:44.436 02:29:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:44.436 02:29:17 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:44.436 02:29:17 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.436 02:29:17 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:44.436 02:29:17 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.436 02:29:17 -- common/autotest_common.sh@1194 -- # return 0 00:09:44.436 02:29:17 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.436 02:29:17 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:44.436 02:29:17 -- common/autotest_common.sh@1205 -- # local i=0 00:09:44.436 02:29:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:44.436 02:29:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.436 02:29:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:44.436 02:29:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.436 02:29:17 -- common/autotest_common.sh@1217 -- # return 0 00:09:44.436 02:29:17 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:44.436 02:29:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.436 02:29:17 -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 02:29:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.436 02:29:17 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.436 02:29:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.436 02:29:17 -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 02:29:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.436 02:29:17 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:44.436 02:29:17 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.436 02:29:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.436 02:29:17 -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 02:29:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.436 02:29:17 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.436 02:29:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.436 02:29:17 -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 [2024-04-27 02:29:17.785487] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.436 02:29:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.436 02:29:17 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:44.436 
02:29:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.436 02:29:17 -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 02:29:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.436 02:29:17 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.436 02:29:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.436 02:29:17 -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 02:29:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.436 02:29:17 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:45.821 02:29:19 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:45.821 02:29:19 -- common/autotest_common.sh@1184 -- # local i=0 00:09:45.821 02:29:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:45.821 02:29:19 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:45.821 02:29:19 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:47.736 02:29:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:47.736 02:29:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:47.736 02:29:21 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:47.736 02:29:21 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:47.736 02:29:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:47.736 02:29:21 -- common/autotest_common.sh@1194 -- # return 0 00:09:47.736 02:29:21 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:47.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.997 02:29:21 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:47.997 02:29:21 -- common/autotest_common.sh@1205 -- # local i=0 00:09:47.997 02:29:21 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:47.997 02:29:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:47.998 02:29:21 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:47.998 02:29:21 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:47.998 02:29:21 -- common/autotest_common.sh@1217 -- # return 0 00:09:47.998 02:29:21 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@99 -- # seq 1 5 00:09:47.998 02:29:21 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:47.998 02:29:21 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 
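Each pass of the loop above provisions and tears down the same subsystem through the RPC interface. Stripped of the rpc_cmd/xtrace wrapping, one cycle looks roughly like this; the rpc.py path is an assumption about the checkout, a running nvmf_tgt with a Malloc1 bdev is assumed, and the RPC names, NQN, serial, listener address and nsid come from the trace:

    #!/usr/bin/env bash
    RPC=./scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5   # expose the bdev as namespace 5
    $RPC nvmf_subsystem_allow_any_host "$NQN"

    # ... host connects, runs I/O, disconnects ...

    $RPC nvmf_subsystem_remove_ns "$NQN" 5
    $RPC nvmf_delete_subsystem "$NQN"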
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 [2024-04-27 02:29:21.434818] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:47.998 02:29:21 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 [2024-04-27 02:29:21.494965] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:47.998 02:29:21 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 [2024-04-27 02:29:21.551141] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:47.998 02:29:21 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 02:29:21 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 [2024-04-27 02:29:21.611328] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.998 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.998 
02:29:21 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:47.998 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.998 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:48.259 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.259 02:29:21 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:48.259 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.259 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:48.259 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.259 02:29:21 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.259 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.259 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:48.259 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.259 02:29:21 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.259 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.259 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:48.259 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.259 02:29:21 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:48.259 02:29:21 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:48.259 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.259 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:48.259 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.259 02:29:21 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.259 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.259 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:48.259 [2024-04-27 02:29:21.671534] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.259 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.260 02:29:21 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:48.260 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.260 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:48.260 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.260 02:29:21 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:48.260 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.260 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:48.260 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.260 02:29:21 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.260 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.260 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:48.260 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.260 02:29:21 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.260 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.260 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:48.260 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.260 02:29:21 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:09:48.260 02:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.260 02:29:21 -- common/autotest_common.sh@10 -- # set +x 00:09:48.260 02:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.260 02:29:21 -- target/rpc.sh@110 -- # stats='{ 00:09:48.260 "tick_rate": 2400000000, 00:09:48.260 "poll_groups": [ 00:09:48.260 { 00:09:48.260 "name": "nvmf_tgt_poll_group_0", 00:09:48.260 "admin_qpairs": 0, 00:09:48.260 "io_qpairs": 224, 00:09:48.260 "current_admin_qpairs": 0, 00:09:48.260 "current_io_qpairs": 0, 00:09:48.260 "pending_bdev_io": 0, 00:09:48.260 "completed_nvme_io": 345, 00:09:48.260 "transports": [ 00:09:48.260 { 00:09:48.260 "trtype": "TCP" 00:09:48.260 } 00:09:48.260 ] 00:09:48.260 }, 00:09:48.260 { 00:09:48.260 "name": "nvmf_tgt_poll_group_1", 00:09:48.260 "admin_qpairs": 1, 00:09:48.260 "io_qpairs": 223, 00:09:48.260 "current_admin_qpairs": 0, 00:09:48.260 "current_io_qpairs": 0, 00:09:48.260 "pending_bdev_io": 0, 00:09:48.260 "completed_nvme_io": 350, 00:09:48.260 "transports": [ 00:09:48.260 { 00:09:48.260 "trtype": "TCP" 00:09:48.260 } 00:09:48.260 ] 00:09:48.260 }, 00:09:48.260 { 00:09:48.260 "name": "nvmf_tgt_poll_group_2", 00:09:48.260 "admin_qpairs": 6, 00:09:48.260 "io_qpairs": 218, 00:09:48.260 "current_admin_qpairs": 0, 00:09:48.260 "current_io_qpairs": 0, 00:09:48.260 "pending_bdev_io": 0, 00:09:48.260 "completed_nvme_io": 267, 00:09:48.260 "transports": [ 00:09:48.260 { 00:09:48.260 "trtype": "TCP" 00:09:48.260 } 00:09:48.260 ] 00:09:48.260 }, 00:09:48.260 { 00:09:48.260 "name": "nvmf_tgt_poll_group_3", 00:09:48.260 "admin_qpairs": 0, 00:09:48.260 "io_qpairs": 224, 00:09:48.260 "current_admin_qpairs": 0, 00:09:48.260 "current_io_qpairs": 0, 00:09:48.260 "pending_bdev_io": 0, 00:09:48.260 "completed_nvme_io": 277, 00:09:48.260 "transports": [ 00:09:48.260 { 00:09:48.260 "trtype": "TCP" 00:09:48.260 } 00:09:48.260 ] 00:09:48.260 } 00:09:48.260 ] 00:09:48.260 }' 00:09:48.260 02:29:21 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:48.260 02:29:21 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:48.260 02:29:21 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:48.260 02:29:21 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:48.260 02:29:21 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:48.260 02:29:21 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:48.260 02:29:21 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:48.260 02:29:21 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:48.260 02:29:21 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:48.260 02:29:21 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:48.260 02:29:21 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:48.260 02:29:21 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:48.260 02:29:21 -- target/rpc.sh@123 -- # nvmftestfini 00:09:48.260 02:29:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:48.260 02:29:21 -- nvmf/common.sh@117 -- # sync 00:09:48.260 02:29:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:48.260 02:29:21 -- nvmf/common.sh@120 -- # set +e 00:09:48.260 02:29:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:48.260 02:29:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:48.260 rmmod nvme_tcp 00:09:48.260 rmmod nvme_fabrics 00:09:48.260 rmmod nvme_keyring 00:09:48.521 02:29:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:48.521 02:29:21 -- nvmf/common.sh@124 -- # set -e 00:09:48.521 02:29:21 -- 
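The jsum helper visible above simply pipes one field of the nvmf_get_stats JSON through jq and sums it with awk; for the stats dump printed here that gives 7 admin qpairs and 889 I/O qpairs across the four poll groups. A compact sketch, with the rpc.py path assumed and the jq/awk filters taken from the trace:

    #!/usr/bin/env bash
    RPC=./scripts/rpc.py
    stats=$($RPC nvmf_get_stats)

    jsum() {
        # sum one numeric field across all poll groups
        jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    echo "admin qpairs: $(jsum '.poll_groups[].admin_qpairs')"
    echo "io qpairs:    $(jsum '.poll_groups[].io_qpairs')"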
nvmf/common.sh@125 -- # return 0 00:09:48.521 02:29:21 -- nvmf/common.sh@478 -- # '[' -n 4173605 ']' 00:09:48.521 02:29:21 -- nvmf/common.sh@479 -- # killprocess 4173605 00:09:48.521 02:29:21 -- common/autotest_common.sh@936 -- # '[' -z 4173605 ']' 00:09:48.521 02:29:21 -- common/autotest_common.sh@940 -- # kill -0 4173605 00:09:48.521 02:29:21 -- common/autotest_common.sh@941 -- # uname 00:09:48.521 02:29:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:48.521 02:29:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4173605 00:09:48.521 02:29:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:48.521 02:29:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:48.521 02:29:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4173605' 00:09:48.521 killing process with pid 4173605 00:09:48.521 02:29:21 -- common/autotest_common.sh@955 -- # kill 4173605 00:09:48.521 02:29:21 -- common/autotest_common.sh@960 -- # wait 4173605 00:09:48.521 02:29:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:48.521 02:29:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:48.521 02:29:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:48.521 02:29:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:48.521 02:29:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:48.521 02:29:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.521 02:29:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:48.521 02:29:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.075 02:29:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:51.075 00:09:51.075 real 0m36.851s 00:09:51.075 user 1m52.287s 00:09:51.075 sys 0m6.740s 00:09:51.075 02:29:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:51.075 02:29:24 -- common/autotest_common.sh@10 -- # set +x 00:09:51.075 ************************************ 00:09:51.075 END TEST nvmf_rpc 00:09:51.075 ************************************ 00:09:51.075 02:29:24 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:51.075 02:29:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:51.075 02:29:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.075 02:29:24 -- common/autotest_common.sh@10 -- # set +x 00:09:51.075 ************************************ 00:09:51.075 START TEST nvmf_invalid 00:09:51.075 ************************************ 00:09:51.075 02:29:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:51.075 * Looking for test storage... 
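The START TEST / END TEST banners and the real/user/sys lines come from the run_test wrapper (in autotest_common.sh, to judge by the trace markers). A guess at its shape, reconstructed only from what this log shows, banners around a timed invocation, and not from the actual SPDK source:

    #!/usr/bin/env bash
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_invalid ./test/nvmf/target/invalid.sh --transport=tcp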
00:09:51.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.075 02:29:24 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.075 02:29:24 -- nvmf/common.sh@7 -- # uname -s 00:09:51.075 02:29:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.075 02:29:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.075 02:29:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.075 02:29:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.075 02:29:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.075 02:29:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.075 02:29:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.075 02:29:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.075 02:29:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.075 02:29:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.075 02:29:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:51.075 02:29:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:51.075 02:29:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.075 02:29:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.075 02:29:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.075 02:29:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.075 02:29:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.075 02:29:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.075 02:29:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.075 02:29:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.076 02:29:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.076 02:29:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.076 02:29:24 -- paths/export.sh@4 -- # 
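The host identity set up just above is reused by every nvme connect in this log: nvme gen-hostnqn emits a UUID-based NQN, and the UUID portion doubles as the host ID. A short sketch under those assumptions, with the target NQN and address as elsewhere in this log:

    #!/usr/bin/env bash
    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}        # keep only the trailing UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420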
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.076 02:29:24 -- paths/export.sh@5 -- # export PATH 00:09:51.076 02:29:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.076 02:29:24 -- nvmf/common.sh@47 -- # : 0 00:09:51.076 02:29:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:51.076 02:29:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:51.076 02:29:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.076 02:29:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.076 02:29:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.076 02:29:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:51.076 02:29:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:51.076 02:29:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:51.076 02:29:24 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:51.076 02:29:24 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.076 02:29:24 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:51.076 02:29:24 -- target/invalid.sh@14 -- # target=foobar 00:09:51.076 02:29:24 -- target/invalid.sh@16 -- # RANDOM=0 00:09:51.076 02:29:24 -- target/invalid.sh@34 -- # nvmftestinit 00:09:51.076 02:29:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:51.076 02:29:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.076 02:29:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:51.076 02:29:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:51.076 02:29:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:51.076 02:29:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.076 02:29:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:51.076 02:29:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.076 02:29:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:51.076 02:29:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:51.076 02:29:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:51.076 02:29:24 -- common/autotest_common.sh@10 -- # set +x 00:09:57.667 02:29:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:57.667 02:29:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:57.667 02:29:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:57.667 02:29:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:57.667 02:29:31 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:57.667 02:29:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:57.667 02:29:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:57.667 02:29:31 -- nvmf/common.sh@295 -- # net_devs=() 00:09:57.667 02:29:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:57.667 02:29:31 -- nvmf/common.sh@296 -- # e810=() 00:09:57.667 02:29:31 -- nvmf/common.sh@296 -- # local -ga e810 00:09:57.667 02:29:31 -- nvmf/common.sh@297 -- # x722=() 00:09:57.667 02:29:31 -- nvmf/common.sh@297 -- # local -ga x722 00:09:57.667 02:29:31 -- nvmf/common.sh@298 -- # mlx=() 00:09:57.667 02:29:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:57.667 02:29:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.667 02:29:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.667 02:29:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.667 02:29:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.667 02:29:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.667 02:29:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.667 02:29:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.667 02:29:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.667 02:29:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.667 02:29:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.667 02:29:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.667 02:29:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:57.667 02:29:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:57.667 02:29:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:57.667 02:29:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:57.667 02:29:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:57.667 02:29:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:57.667 02:29:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.667 02:29:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:57.667 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:57.667 02:29:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:57.667 02:29:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:57.667 02:29:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.667 02:29:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.667 02:29:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:57.667 02:29:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.667 02:29:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:57.667 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:57.667 02:29:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:57.667 02:29:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:57.668 02:29:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.668 02:29:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.668 02:29:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:57.668 02:29:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:57.668 02:29:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:57.668 02:29:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:57.668 02:29:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.668 
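The "Found 0000:4b:00.0/1 (0x8086 - 0x159b)" lines above come from nvmf/common.sh walking its cached PCI list for supported NICs, here two Intel E810 ports. A rough standalone equivalent that reads sysfs directly; this is an illustration, not the pci_bus_cache helper itself:

    #!/usr/bin/env bash
    # List PCI functions matching the Intel E810 IDs the trace is checking for.
    vendor=0x8086 device=0x159b
    for dev in /sys/bus/pci/devices/*; do
        [[ $(cat "$dev/vendor") == "$vendor" && $(cat "$dev/device") == "$device" ]] || continue
        echo "Found ${dev##*/} ($vendor - $device)"
        ls "$dev/net" 2>/dev/null        # bound net devices, e.g. cvl_0_0 / cvl_0_1
    done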
02:29:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.668 02:29:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:57.668 02:29:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.668 02:29:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:57.668 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:57.668 02:29:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.668 02:29:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.668 02:29:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.668 02:29:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:57.668 02:29:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.668 02:29:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:57.668 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:57.668 02:29:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.668 02:29:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:57.668 02:29:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:57.668 02:29:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:57.668 02:29:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:57.668 02:29:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:57.668 02:29:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.668 02:29:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.668 02:29:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.668 02:29:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:57.668 02:29:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.668 02:29:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.668 02:29:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:57.668 02:29:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.668 02:29:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.668 02:29:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:57.668 02:29:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:57.668 02:29:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.668 02:29:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.929 02:29:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.929 02:29:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.929 02:29:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:57.929 02:29:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.929 02:29:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.929 02:29:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.191 02:29:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:58.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:09:58.191 00:09:58.191 --- 10.0.0.2 ping statistics --- 00:09:58.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.191 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:09:58.191 02:29:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
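nvmf_tcp_init, traced above, wires the two E810 ports into a loopback topology: the first port is moved into a network namespace as the target at 10.0.0.2, the second stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, and both directions are pinged. Condensed below (needs root; interface names and addresses as in the trace):

    #!/usr/bin/env bash
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"

    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator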
00:09:58.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:09:58.191 00:09:58.191 --- 10.0.0.1 ping statistics --- 00:09:58.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.191 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:09:58.191 02:29:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.191 02:29:31 -- nvmf/common.sh@411 -- # return 0 00:09:58.191 02:29:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:58.191 02:29:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.191 02:29:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:58.191 02:29:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:58.191 02:29:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.191 02:29:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:58.191 02:29:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:58.191 02:29:31 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:58.191 02:29:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:58.191 02:29:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:58.191 02:29:31 -- common/autotest_common.sh@10 -- # set +x 00:09:58.191 02:29:31 -- nvmf/common.sh@470 -- # nvmfpid=4183306 00:09:58.191 02:29:31 -- nvmf/common.sh@471 -- # waitforlisten 4183306 00:09:58.191 02:29:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.191 02:29:31 -- common/autotest_common.sh@817 -- # '[' -z 4183306 ']' 00:09:58.191 02:29:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.191 02:29:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:58.191 02:29:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.191 02:29:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:58.191 02:29:31 -- common/autotest_common.sh@10 -- # set +x 00:09:58.191 [2024-04-27 02:29:31.682331] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:09:58.191 [2024-04-27 02:29:31.682394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.191 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.191 [2024-04-27 02:29:31.754213] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.452 [2024-04-27 02:29:31.828016] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.452 [2024-04-27 02:29:31.828061] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.452 [2024-04-27 02:29:31.828069] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.452 [2024-04-27 02:29:31.828076] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.452 [2024-04-27 02:29:31.828082] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
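nvmfappstart then launches nvmf_tgt inside that namespace with the flags shown (-i 0 -e 0xFFFF -m 0xF) and waits for its RPC socket before the tests start issuing RPCs. A hedged sketch of that startup; the poll on rpc_get_methods and the final nvmf_create_transport call are assumptions about how waitforlisten and NVMF_TRANSPORT_OPTS ('-t tcp -o' in this run) are consumed:

    #!/usr/bin/env bash
    NS=cvl_0_0_ns_spdk
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # wait until the target answers on its default RPC socket
    for _ in $(seq 1 30); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done

    ./scripts/rpc.py nvmf_create_transport -t tcp -o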
00:09:58.452 [2024-04-27 02:29:31.828126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.453 [2024-04-27 02:29:31.828274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.453 [2024-04-27 02:29:31.828444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.453 [2024-04-27 02:29:31.828546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.025 02:29:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:59.025 02:29:32 -- common/autotest_common.sh@850 -- # return 0 00:09:59.025 02:29:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:59.025 02:29:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:59.025 02:29:32 -- common/autotest_common.sh@10 -- # set +x 00:09:59.025 02:29:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.025 02:29:32 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:59.025 02:29:32 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20632 00:09:59.025 [2024-04-27 02:29:32.642221] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:59.286 02:29:32 -- target/invalid.sh@40 -- # out='request: 00:09:59.286 { 00:09:59.286 "nqn": "nqn.2016-06.io.spdk:cnode20632", 00:09:59.286 "tgt_name": "foobar", 00:09:59.286 "method": "nvmf_create_subsystem", 00:09:59.286 "req_id": 1 00:09:59.286 } 00:09:59.286 Got JSON-RPC error response 00:09:59.286 response: 00:09:59.286 { 00:09:59.286 "code": -32603, 00:09:59.286 "message": "Unable to find target foobar" 00:09:59.286 }' 00:09:59.286 02:29:32 -- target/invalid.sh@41 -- # [[ request: 00:09:59.286 { 00:09:59.286 "nqn": "nqn.2016-06.io.spdk:cnode20632", 00:09:59.286 "tgt_name": "foobar", 00:09:59.286 "method": "nvmf_create_subsystem", 00:09:59.286 "req_id": 1 00:09:59.286 } 00:09:59.286 Got JSON-RPC error response 00:09:59.286 response: 00:09:59.286 { 00:09:59.286 "code": -32603, 00:09:59.286 "message": "Unable to find target foobar" 00:09:59.286 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:59.286 02:29:32 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:59.286 02:29:32 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32206 00:09:59.286 [2024-04-27 02:29:32.814815] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32206: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:59.286 02:29:32 -- target/invalid.sh@45 -- # out='request: 00:09:59.286 { 00:09:59.286 "nqn": "nqn.2016-06.io.spdk:cnode32206", 00:09:59.286 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:59.286 "method": "nvmf_create_subsystem", 00:09:59.286 "req_id": 1 00:09:59.286 } 00:09:59.286 Got JSON-RPC error response 00:09:59.286 response: 00:09:59.286 { 00:09:59.286 "code": -32602, 00:09:59.286 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:59.286 }' 00:09:59.286 02:29:32 -- target/invalid.sh@46 -- # [[ request: 00:09:59.286 { 00:09:59.286 "nqn": "nqn.2016-06.io.spdk:cnode32206", 00:09:59.286 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:59.286 "method": "nvmf_create_subsystem", 00:09:59.286 "req_id": 1 00:09:59.286 } 00:09:59.286 Got JSON-RPC error response 00:09:59.286 response: 00:09:59.286 { 
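First negative case above: creating a subsystem on a target name that does not exist ("foobar") must fail with the -32603 "Unable to find target" JSON-RPC error, which the test matches with a glob. The same check standalone, with the rpc.py path assumed and the NQN and target name from the trace:

    #!/usr/bin/env bash
    out=$(./scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20632 2>&1) || true
    if [[ $out == *"Unable to find target"* ]]; then
        echo "rejected as expected"
    else
        echo "unexpected response: $out" >&2
        exit 1
    fi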
00:09:59.286 "code": -32602, 00:09:59.286 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:59.286 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:59.286 02:29:32 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:59.286 02:29:32 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12208 00:09:59.548 [2024-04-27 02:29:32.991439] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12208: invalid model number 'SPDK_Controller' 00:09:59.548 02:29:33 -- target/invalid.sh@50 -- # out='request: 00:09:59.548 { 00:09:59.548 "nqn": "nqn.2016-06.io.spdk:cnode12208", 00:09:59.548 "model_number": "SPDK_Controller\u001f", 00:09:59.548 "method": "nvmf_create_subsystem", 00:09:59.548 "req_id": 1 00:09:59.548 } 00:09:59.548 Got JSON-RPC error response 00:09:59.548 response: 00:09:59.548 { 00:09:59.548 "code": -32602, 00:09:59.548 "message": "Invalid MN SPDK_Controller\u001f" 00:09:59.548 }' 00:09:59.548 02:29:33 -- target/invalid.sh@51 -- # [[ request: 00:09:59.548 { 00:09:59.548 "nqn": "nqn.2016-06.io.spdk:cnode12208", 00:09:59.548 "model_number": "SPDK_Controller\u001f", 00:09:59.548 "method": "nvmf_create_subsystem", 00:09:59.548 "req_id": 1 00:09:59.548 } 00:09:59.548 Got JSON-RPC error response 00:09:59.548 response: 00:09:59.548 { 00:09:59.548 "code": -32602, 00:09:59.548 "message": "Invalid MN SPDK_Controller\u001f" 00:09:59.548 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:59.548 02:29:33 -- target/invalid.sh@54 -- # gen_random_s 21 00:09:59.548 02:29:33 -- target/invalid.sh@19 -- # local length=21 ll 00:09:59.548 02:29:33 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:59.548 02:29:33 -- target/invalid.sh@21 -- # local chars 00:09:59.548 02:29:33 -- target/invalid.sh@22 -- # local string 00:09:59.548 02:29:33 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:59.548 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.548 02:29:33 -- target/invalid.sh@25 -- # printf %x 54 00:09:59.548 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:59.548 02:29:33 -- target/invalid.sh@25 -- # string+=6 00:09:59.548 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.548 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.548 02:29:33 -- target/invalid.sh@25 -- # printf %x 87 00:09:59.548 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:59.548 02:29:33 -- target/invalid.sh@25 -- # string+=W 00:09:59.548 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.548 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.548 02:29:33 -- target/invalid.sh@25 -- # printf %x 69 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+=E 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 52 00:09:59.549 02:29:33 -- 
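The next two cases embed a non-printable byte (0x1f) in the serial number and in the model number; both are rejected with "Invalid SN" / "Invalid MN". The same thing without the harness, using the flags, NQNs and appended byte from the trace; the rpc.py path is an assumption:

    #!/usr/bin/env bash
    RPC=./scripts/rpc.py

    bad_sn=$'SPDKISFASTANDAWESOME\037'      # 0x1f appended, as echo -e '\x1f' does above
    $RPC nvmf_create_subsystem -s "$bad_sn" nqn.2016-06.io.spdk:cnode32206 2>&1 | grep 'Invalid SN'

    bad_mn=$'SPDK_Controller\037'
    $RPC nvmf_create_subsystem -d "$bad_mn" nqn.2016-06.io.spdk:cnode12208 2>&1 | grep 'Invalid MN'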
target/invalid.sh@25 -- # echo -e '\x34' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+=4 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 77 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+=M 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 124 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+='|' 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 125 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+='}' 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 41 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+=')' 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 63 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+='?' 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 68 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+=D 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 39 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+=\' 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 88 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+=X 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 47 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+=/ 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 55 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+=7 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 74 00:09:59.549 02:29:33 -- 
target/invalid.sh@25 -- # echo -e '\x4a' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+=J 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 110 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+=n 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 34 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+='"' 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 55 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+=7 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # printf %x 93 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:59.549 02:29:33 -- target/invalid.sh@25 -- # string+=']' 00:09:59.549 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # printf %x 68 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # string+=D 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # printf %x 115 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # string+=s 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.810 02:29:33 -- target/invalid.sh@28 -- # [[ 6 == \- ]] 00:09:59.810 02:29:33 -- target/invalid.sh@31 -- # echo '6WE4M|})?D'\''X/7Jn"7]Ds' 00:09:59.810 02:29:33 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '6WE4M|})?D'\''X/7Jn"7]Ds' nqn.2016-06.io.spdk:cnode14516 00:09:59.810 [2024-04-27 02:29:33.328504] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14516: invalid serial number '6WE4M|})?D'X/7Jn"7]Ds' 00:09:59.810 02:29:33 -- target/invalid.sh@54 -- # out='request: 00:09:59.810 { 00:09:59.810 "nqn": "nqn.2016-06.io.spdk:cnode14516", 00:09:59.810 "serial_number": "6WE4M|})?D'\''X/7Jn\"7]Ds", 00:09:59.810 "method": "nvmf_create_subsystem", 00:09:59.810 "req_id": 1 00:09:59.810 } 00:09:59.810 Got JSON-RPC error response 00:09:59.810 response: 00:09:59.810 { 00:09:59.810 "code": -32602, 00:09:59.810 "message": "Invalid SN 6WE4M|})?D'\''X/7Jn\"7]Ds" 00:09:59.810 }' 00:09:59.810 02:29:33 -- target/invalid.sh@55 -- # [[ request: 00:09:59.810 { 00:09:59.810 "nqn": "nqn.2016-06.io.spdk:cnode14516", 00:09:59.810 "serial_number": "6WE4M|})?D'X/7Jn\"7]Ds", 00:09:59.810 "method": "nvmf_create_subsystem", 00:09:59.810 "req_id": 1 00:09:59.810 } 00:09:59.810 Got JSON-RPC error response 00:09:59.810 response: 00:09:59.810 { 00:09:59.810 "code": -32602, 
00:09:59.810 "message": "Invalid SN 6WE4M|})?D'X/7Jn\"7]Ds" 00:09:59.810 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:59.810 02:29:33 -- target/invalid.sh@58 -- # gen_random_s 41 00:09:59.810 02:29:33 -- target/invalid.sh@19 -- # local length=41 ll 00:09:59.810 02:29:33 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:59.810 02:29:33 -- target/invalid.sh@21 -- # local chars 00:09:59.810 02:29:33 -- target/invalid.sh@22 -- # local string 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # printf %x 66 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # string+=B 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # printf %x 100 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # string+=d 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # printf %x 93 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # string+=']' 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # printf %x 73 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # string+=I 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # printf %x 74 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # string+=J 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # printf %x 120 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # string+=x 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # printf %x 112 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # string+=p 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # printf %x 48 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # string+=0 00:09:59.810 02:29:33 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:59.810 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # printf %x 38 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:59.810 02:29:33 -- target/invalid.sh@25 -- # string+='&' 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 107 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=k 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 53 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=5 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 38 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+='&' 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 80 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=P 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 50 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=2 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 58 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=: 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 99 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=c 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 126 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+='~' 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 81 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=Q 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 95 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=_ 00:10:00.071 02:29:33 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 33 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+='!' 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 61 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+== 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 73 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=I 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 47 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=/ 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 76 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=L 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 84 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=T 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 111 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=o 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 90 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=Z 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 58 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=: 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # printf %x 57 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:00.071 02:29:33 -- target/invalid.sh@25 -- # string+=9 00:10:00.071 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # printf %x 126 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # string+='~' 00:10:00.072 02:29:33 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # printf %x 48 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # string+=0 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # printf %x 68 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # string+=D 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # printf %x 73 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # string+=I 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # printf %x 109 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # string+=m 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # printf %x 86 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # string+=V 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # printf %x 86 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # string+=V 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # printf %x 41 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # string+=')' 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # printf %x 90 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # string+=Z 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # printf %x 120 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # string+=x 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # printf %x 70 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # string+=F 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # printf %x 110 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:00.072 02:29:33 -- target/invalid.sh@25 -- # string+=n 00:10:00.072 02:29:33 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:10:00.072 02:29:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.072 02:29:33 -- target/invalid.sh@28 -- # [[ B == \- ]] 00:10:00.072 02:29:33 -- target/invalid.sh@31 -- # echo 'Bd]IJxp0&k5&P2:c~Q_!=I/LToZ:9~0DImVV)ZxFn' 00:10:00.072 02:29:33 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Bd]IJxp0&k5&P2:c~Q_!=I/LToZ:9~0DImVV)ZxFn' nqn.2016-06.io.spdk:cnode10485 00:10:00.333 [2024-04-27 02:29:33.806066] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10485: invalid model number 'Bd]IJxp0&k5&P2:c~Q_!=I/LToZ:9~0DImVV)ZxFn' 00:10:00.333 02:29:33 -- target/invalid.sh@58 -- # out='request: 00:10:00.333 { 00:10:00.333 "nqn": "nqn.2016-06.io.spdk:cnode10485", 00:10:00.333 "model_number": "Bd]IJxp0&k5&P2:c~Q_!=I/LToZ:9~0DImVV)ZxFn", 00:10:00.333 "method": "nvmf_create_subsystem", 00:10:00.333 "req_id": 1 00:10:00.333 } 00:10:00.333 Got JSON-RPC error response 00:10:00.333 response: 00:10:00.333 { 00:10:00.333 "code": -32602, 00:10:00.333 "message": "Invalid MN Bd]IJxp0&k5&P2:c~Q_!=I/LToZ:9~0DImVV)ZxFn" 00:10:00.333 }' 00:10:00.333 02:29:33 -- target/invalid.sh@59 -- # [[ request: 00:10:00.333 { 00:10:00.333 "nqn": "nqn.2016-06.io.spdk:cnode10485", 00:10:00.333 "model_number": "Bd]IJxp0&k5&P2:c~Q_!=I/LToZ:9~0DImVV)ZxFn", 00:10:00.333 "method": "nvmf_create_subsystem", 00:10:00.333 "req_id": 1 00:10:00.333 } 00:10:00.333 Got JSON-RPC error response 00:10:00.333 response: 00:10:00.333 { 00:10:00.333 "code": -32602, 00:10:00.333 "message": "Invalid MN Bd]IJxp0&k5&P2:c~Q_!=I/LToZ:9~0DImVV)ZxFn" 00:10:00.333 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:00.333 02:29:33 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:00.628 [2024-04-27 02:29:33.978705] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.628 02:29:34 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:00.628 02:29:34 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:00.628 02:29:34 -- target/invalid.sh@67 -- # echo '' 00:10:00.628 02:29:34 -- target/invalid.sh@67 -- # head -n 1 00:10:00.628 02:29:34 -- target/invalid.sh@67 -- # IP= 00:10:00.628 02:29:34 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:00.915 [2024-04-27 02:29:34.331850] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:00.915 02:29:34 -- target/invalid.sh@69 -- # out='request: 00:10:00.915 { 00:10:00.915 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:00.915 "listen_address": { 00:10:00.915 "trtype": "tcp", 00:10:00.915 "traddr": "", 00:10:00.915 "trsvcid": "4421" 00:10:00.915 }, 00:10:00.915 "method": "nvmf_subsystem_remove_listener", 00:10:00.915 "req_id": 1 00:10:00.915 } 00:10:00.915 Got JSON-RPC error response 00:10:00.915 response: 00:10:00.915 { 00:10:00.915 "code": -32602, 00:10:00.915 "message": "Invalid parameters" 00:10:00.915 }' 00:10:00.915 02:29:34 -- target/invalid.sh@70 -- # [[ request: 00:10:00.915 { 00:10:00.915 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:00.915 "listen_address": { 00:10:00.915 "trtype": "tcp", 00:10:00.915 "traddr": "", 00:10:00.915 "trsvcid": "4421" 00:10:00.915 }, 00:10:00.915 "method": 
"nvmf_subsystem_remove_listener", 00:10:00.915 "req_id": 1 00:10:00.915 } 00:10:00.915 Got JSON-RPC error response 00:10:00.915 response: 00:10:00.915 { 00:10:00.915 "code": -32602, 00:10:00.915 "message": "Invalid parameters" 00:10:00.915 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:00.915 02:29:34 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25392 -i 0 00:10:00.915 [2024-04-27 02:29:34.496328] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25392: invalid cntlid range [0-65519] 00:10:00.915 02:29:34 -- target/invalid.sh@73 -- # out='request: 00:10:00.915 { 00:10:00.915 "nqn": "nqn.2016-06.io.spdk:cnode25392", 00:10:00.915 "min_cntlid": 0, 00:10:00.916 "method": "nvmf_create_subsystem", 00:10:00.916 "req_id": 1 00:10:00.916 } 00:10:00.916 Got JSON-RPC error response 00:10:00.916 response: 00:10:00.916 { 00:10:00.916 "code": -32602, 00:10:00.916 "message": "Invalid cntlid range [0-65519]" 00:10:00.916 }' 00:10:00.916 02:29:34 -- target/invalid.sh@74 -- # [[ request: 00:10:00.916 { 00:10:00.916 "nqn": "nqn.2016-06.io.spdk:cnode25392", 00:10:00.916 "min_cntlid": 0, 00:10:00.916 "method": "nvmf_create_subsystem", 00:10:00.916 "req_id": 1 00:10:00.916 } 00:10:00.916 Got JSON-RPC error response 00:10:00.916 response: 00:10:00.916 { 00:10:00.916 "code": -32602, 00:10:00.916 "message": "Invalid cntlid range [0-65519]" 00:10:00.916 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:00.916 02:29:34 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode628 -i 65520 00:10:01.175 [2024-04-27 02:29:34.668874] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode628: invalid cntlid range [65520-65519] 00:10:01.175 02:29:34 -- target/invalid.sh@75 -- # out='request: 00:10:01.176 { 00:10:01.176 "nqn": "nqn.2016-06.io.spdk:cnode628", 00:10:01.176 "min_cntlid": 65520, 00:10:01.176 "method": "nvmf_create_subsystem", 00:10:01.176 "req_id": 1 00:10:01.176 } 00:10:01.176 Got JSON-RPC error response 00:10:01.176 response: 00:10:01.176 { 00:10:01.176 "code": -32602, 00:10:01.176 "message": "Invalid cntlid range [65520-65519]" 00:10:01.176 }' 00:10:01.176 02:29:34 -- target/invalid.sh@76 -- # [[ request: 00:10:01.176 { 00:10:01.176 "nqn": "nqn.2016-06.io.spdk:cnode628", 00:10:01.176 "min_cntlid": 65520, 00:10:01.176 "method": "nvmf_create_subsystem", 00:10:01.176 "req_id": 1 00:10:01.176 } 00:10:01.176 Got JSON-RPC error response 00:10:01.176 response: 00:10:01.176 { 00:10:01.176 "code": -32602, 00:10:01.176 "message": "Invalid cntlid range [65520-65519]" 00:10:01.176 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:01.176 02:29:34 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16517 -I 0 00:10:01.437 [2024-04-27 02:29:34.845466] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16517: invalid cntlid range [1-0] 00:10:01.437 02:29:34 -- target/invalid.sh@77 -- # out='request: 00:10:01.437 { 00:10:01.437 "nqn": "nqn.2016-06.io.spdk:cnode16517", 00:10:01.437 "max_cntlid": 0, 00:10:01.437 "method": "nvmf_create_subsystem", 00:10:01.437 "req_id": 1 00:10:01.437 } 00:10:01.437 Got JSON-RPC error response 00:10:01.437 response: 00:10:01.437 { 00:10:01.437 "code": -32602, 00:10:01.437 "message": "Invalid 
cntlid range [1-0]" 00:10:01.437 }' 00:10:01.437 02:29:34 -- target/invalid.sh@78 -- # [[ request: 00:10:01.437 { 00:10:01.437 "nqn": "nqn.2016-06.io.spdk:cnode16517", 00:10:01.437 "max_cntlid": 0, 00:10:01.437 "method": "nvmf_create_subsystem", 00:10:01.437 "req_id": 1 00:10:01.437 } 00:10:01.437 Got JSON-RPC error response 00:10:01.437 response: 00:10:01.437 { 00:10:01.437 "code": -32602, 00:10:01.437 "message": "Invalid cntlid range [1-0]" 00:10:01.437 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:01.437 02:29:34 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20584 -I 65520 00:10:01.437 [2024-04-27 02:29:35.021995] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20584: invalid cntlid range [1-65520] 00:10:01.437 02:29:35 -- target/invalid.sh@79 -- # out='request: 00:10:01.437 { 00:10:01.437 "nqn": "nqn.2016-06.io.spdk:cnode20584", 00:10:01.437 "max_cntlid": 65520, 00:10:01.437 "method": "nvmf_create_subsystem", 00:10:01.437 "req_id": 1 00:10:01.437 } 00:10:01.437 Got JSON-RPC error response 00:10:01.437 response: 00:10:01.437 { 00:10:01.437 "code": -32602, 00:10:01.437 "message": "Invalid cntlid range [1-65520]" 00:10:01.437 }' 00:10:01.437 02:29:35 -- target/invalid.sh@80 -- # [[ request: 00:10:01.437 { 00:10:01.437 "nqn": "nqn.2016-06.io.spdk:cnode20584", 00:10:01.437 "max_cntlid": 65520, 00:10:01.437 "method": "nvmf_create_subsystem", 00:10:01.437 "req_id": 1 00:10:01.437 } 00:10:01.437 Got JSON-RPC error response 00:10:01.437 response: 00:10:01.437 { 00:10:01.437 "code": -32602, 00:10:01.437 "message": "Invalid cntlid range [1-65520]" 00:10:01.437 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:01.437 02:29:35 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9976 -i 6 -I 5 00:10:01.697 [2024-04-27 02:29:35.198591] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9976: invalid cntlid range [6-5] 00:10:01.697 02:29:35 -- target/invalid.sh@83 -- # out='request: 00:10:01.697 { 00:10:01.697 "nqn": "nqn.2016-06.io.spdk:cnode9976", 00:10:01.697 "min_cntlid": 6, 00:10:01.697 "max_cntlid": 5, 00:10:01.697 "method": "nvmf_create_subsystem", 00:10:01.697 "req_id": 1 00:10:01.697 } 00:10:01.697 Got JSON-RPC error response 00:10:01.697 response: 00:10:01.697 { 00:10:01.697 "code": -32602, 00:10:01.697 "message": "Invalid cntlid range [6-5]" 00:10:01.697 }' 00:10:01.697 02:29:35 -- target/invalid.sh@84 -- # [[ request: 00:10:01.697 { 00:10:01.697 "nqn": "nqn.2016-06.io.spdk:cnode9976", 00:10:01.697 "min_cntlid": 6, 00:10:01.697 "max_cntlid": 5, 00:10:01.697 "method": "nvmf_create_subsystem", 00:10:01.697 "req_id": 1 00:10:01.697 } 00:10:01.697 Got JSON-RPC error response 00:10:01.697 response: 00:10:01.697 { 00:10:01.697 "code": -32602, 00:10:01.697 "message": "Invalid cntlid range [6-5]" 00:10:01.697 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:01.697 02:29:35 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:01.958 02:29:35 -- target/invalid.sh@87 -- # out='request: 00:10:01.958 { 00:10:01.958 "name": "foobar", 00:10:01.958 "method": "nvmf_delete_target", 00:10:01.958 "req_id": 1 00:10:01.958 } 00:10:01.958 Got JSON-RPC error response 00:10:01.958 response: 00:10:01.958 { 00:10:01.958 "code": -32602, 
00:10:01.958 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:01.958 }' 00:10:01.958 02:29:35 -- target/invalid.sh@88 -- # [[ request: 00:10:01.958 { 00:10:01.958 "name": "foobar", 00:10:01.958 "method": "nvmf_delete_target", 00:10:01.958 "req_id": 1 00:10:01.958 } 00:10:01.958 Got JSON-RPC error response 00:10:01.958 response: 00:10:01.958 { 00:10:01.958 "code": -32602, 00:10:01.958 "message": "The specified target doesn't exist, cannot delete it." 00:10:01.958 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:01.958 02:29:35 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:01.958 02:29:35 -- target/invalid.sh@91 -- # nvmftestfini 00:10:01.958 02:29:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:01.958 02:29:35 -- nvmf/common.sh@117 -- # sync 00:10:01.958 02:29:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:01.958 02:29:35 -- nvmf/common.sh@120 -- # set +e 00:10:01.958 02:29:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:01.958 02:29:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:01.958 rmmod nvme_tcp 00:10:01.958 rmmod nvme_fabrics 00:10:01.958 rmmod nvme_keyring 00:10:01.958 02:29:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:01.958 02:29:35 -- nvmf/common.sh@124 -- # set -e 00:10:01.958 02:29:35 -- nvmf/common.sh@125 -- # return 0 00:10:01.958 02:29:35 -- nvmf/common.sh@478 -- # '[' -n 4183306 ']' 00:10:01.958 02:29:35 -- nvmf/common.sh@479 -- # killprocess 4183306 00:10:01.958 02:29:35 -- common/autotest_common.sh@936 -- # '[' -z 4183306 ']' 00:10:01.958 02:29:35 -- common/autotest_common.sh@940 -- # kill -0 4183306 00:10:01.958 02:29:35 -- common/autotest_common.sh@941 -- # uname 00:10:01.958 02:29:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:01.958 02:29:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4183306 00:10:01.958 02:29:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:01.958 02:29:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:01.958 02:29:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4183306' 00:10:01.958 killing process with pid 4183306 00:10:01.958 02:29:35 -- common/autotest_common.sh@955 -- # kill 4183306 00:10:01.958 02:29:35 -- common/autotest_common.sh@960 -- # wait 4183306 00:10:02.219 02:29:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:02.219 02:29:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:02.219 02:29:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:02.219 02:29:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:02.219 02:29:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:02.219 02:29:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.219 02:29:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:02.219 02:29:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.134 02:29:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:04.134 00:10:04.134 real 0m13.312s 00:10:04.134 user 0m19.187s 00:10:04.134 sys 0m6.249s 00:10:04.134 02:29:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:04.134 02:29:37 -- common/autotest_common.sh@10 -- # set +x 00:10:04.134 ************************************ 00:10:04.134 END TEST nvmf_invalid 00:10:04.134 ************************************ 00:10:04.134 02:29:37 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:04.134 02:29:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:04.134 02:29:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.134 02:29:37 -- common/autotest_common.sh@10 -- # set +x 00:10:04.396 ************************************ 00:10:04.396 START TEST nvmf_abort 00:10:04.396 ************************************ 00:10:04.396 02:29:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:04.396 * Looking for test storage... 00:10:04.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.396 02:29:37 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.396 02:29:37 -- nvmf/common.sh@7 -- # uname -s 00:10:04.396 02:29:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.396 02:29:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.396 02:29:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.396 02:29:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.396 02:29:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.396 02:29:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.396 02:29:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.396 02:29:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.396 02:29:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.396 02:29:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.396 02:29:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:04.396 02:29:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:04.396 02:29:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.396 02:29:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.396 02:29:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.396 02:29:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.396 02:29:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.396 02:29:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.396 02:29:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.396 02:29:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.396 02:29:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.396 02:29:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.396 02:29:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.396 02:29:37 -- paths/export.sh@5 -- # export PATH 00:10:04.396 02:29:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.396 02:29:37 -- nvmf/common.sh@47 -- # : 0 00:10:04.396 02:29:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.396 02:29:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.396 02:29:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.396 02:29:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.396 02:29:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.396 02:29:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.396 02:29:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.396 02:29:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.396 02:29:37 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.396 02:29:37 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:04.396 02:29:37 -- target/abort.sh@14 -- # nvmftestinit 00:10:04.396 02:29:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:04.396 02:29:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.396 02:29:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:04.396 02:29:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:04.396 02:29:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:04.396 02:29:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.396 02:29:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.396 02:29:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.396 02:29:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:04.396 02:29:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:04.396 02:29:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:04.396 02:29:37 -- common/autotest_common.sh@10 -- # set +x 00:10:10.989 02:29:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:10:10.989 02:29:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:10.989 02:29:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:10.989 02:29:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:10.989 02:29:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:10.989 02:29:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:10.989 02:29:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:10.989 02:29:44 -- nvmf/common.sh@295 -- # net_devs=() 00:10:10.989 02:29:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:10.989 02:29:44 -- nvmf/common.sh@296 -- # e810=() 00:10:10.989 02:29:44 -- nvmf/common.sh@296 -- # local -ga e810 00:10:10.989 02:29:44 -- nvmf/common.sh@297 -- # x722=() 00:10:10.989 02:29:44 -- nvmf/common.sh@297 -- # local -ga x722 00:10:10.989 02:29:44 -- nvmf/common.sh@298 -- # mlx=() 00:10:10.989 02:29:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:10.989 02:29:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.989 02:29:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.989 02:29:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.989 02:29:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.989 02:29:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.989 02:29:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.989 02:29:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.989 02:29:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.989 02:29:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.989 02:29:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.989 02:29:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.989 02:29:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:10.989 02:29:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:10.989 02:29:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:10.989 02:29:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.989 02:29:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:10.989 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:10.989 02:29:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.989 02:29:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:10.989 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:10.989 02:29:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
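The NIC classification above keys on PCI device IDs (0x1592/0x159b for Intel E810, 0x37d2 for X722, plus several Mellanox IDs) and then resolves each matching PCI address to its kernel interface name via sysfs. A small sketch of that lookup pattern, using the address reported in this run (0000:4b:00.0) purely as an example:

    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"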
00:10:10.989 02:29:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.989 02:29:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.989 02:29:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:10.989 02:29:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.989 02:29:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:10.989 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:10.989 02:29:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.989 02:29:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.989 02:29:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.989 02:29:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:10.989 02:29:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.989 02:29:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:10.989 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:10.989 02:29:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.989 02:29:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:10.989 02:29:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:10.989 02:29:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:10.989 02:29:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:10.989 02:29:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.989 02:29:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.989 02:29:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.989 02:29:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:10.989 02:29:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.989 02:29:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.989 02:29:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:10.989 02:29:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.989 02:29:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.989 02:29:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:10.989 02:29:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:10.989 02:29:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.989 02:29:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.250 02:29:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.250 02:29:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.250 02:29:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:11.250 02:29:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.250 02:29:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.250 02:29:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.250 02:29:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:11.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:11.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:10:11.250 00:10:11.250 --- 10.0.0.2 ping statistics --- 00:10:11.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.250 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:10:11.250 02:29:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:11.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:10:11.250 00:10:11.250 --- 10.0.0.1 ping statistics --- 00:10:11.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.250 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:10:11.250 02:29:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.250 02:29:44 -- nvmf/common.sh@411 -- # return 0 00:10:11.250 02:29:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:11.250 02:29:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.250 02:29:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:11.250 02:29:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:11.250 02:29:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.250 02:29:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:11.250 02:29:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:11.250 02:29:44 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:11.250 02:29:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:11.250 02:29:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:11.250 02:29:44 -- common/autotest_common.sh@10 -- # set +x 00:10:11.250 02:29:44 -- nvmf/common.sh@470 -- # nvmfpid=4188334 00:10:11.250 02:29:44 -- nvmf/common.sh@471 -- # waitforlisten 4188334 00:10:11.250 02:29:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:11.250 02:29:44 -- common/autotest_common.sh@817 -- # '[' -z 4188334 ']' 00:10:11.250 02:29:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.250 02:29:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:11.250 02:29:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.250 02:29:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:11.250 02:29:44 -- common/autotest_common.sh@10 -- # set +x 00:10:11.511 [2024-04-27 02:29:44.882962] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:10:11.511 [2024-04-27 02:29:44.883044] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.511 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.511 [2024-04-27 02:29:44.954521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:11.511 [2024-04-27 02:29:45.027020] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.511 [2024-04-27 02:29:45.027054] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:11.511 [2024-04-27 02:29:45.027061] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.511 [2024-04-27 02:29:45.027067] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.511 [2024-04-27 02:29:45.027073] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.511 [2024-04-27 02:29:45.027179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.511 [2024-04-27 02:29:45.027319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.511 [2024-04-27 02:29:45.027479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.083 02:29:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:12.083 02:29:45 -- common/autotest_common.sh@850 -- # return 0 00:10:12.083 02:29:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:12.083 02:29:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:12.083 02:29:45 -- common/autotest_common.sh@10 -- # set +x 00:10:12.083 02:29:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.083 02:29:45 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:12.344 02:29:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.344 02:29:45 -- common/autotest_common.sh@10 -- # set +x 00:10:12.344 [2024-04-27 02:29:45.711581] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.344 02:29:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.344 02:29:45 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:12.344 02:29:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.344 02:29:45 -- common/autotest_common.sh@10 -- # set +x 00:10:12.344 Malloc0 00:10:12.344 02:29:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.344 02:29:45 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:12.344 02:29:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.344 02:29:45 -- common/autotest_common.sh@10 -- # set +x 00:10:12.344 Delay0 00:10:12.344 02:29:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.344 02:29:45 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:12.344 02:29:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.344 02:29:45 -- common/autotest_common.sh@10 -- # set +x 00:10:12.344 02:29:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.344 02:29:45 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:12.344 02:29:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.344 02:29:45 -- common/autotest_common.sh@10 -- # set +x 00:10:12.344 02:29:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.344 02:29:45 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:12.344 02:29:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.344 02:29:45 -- common/autotest_common.sh@10 -- # set +x 00:10:12.344 [2024-04-27 02:29:45.788600] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.344 02:29:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.344 02:29:45 -- target/abort.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:12.344 02:29:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.344 02:29:45 -- common/autotest_common.sh@10 -- # set +x 00:10:12.344 02:29:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.344 02:29:45 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:12.344 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.344 [2024-04-27 02:29:45.903077] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:14.902 Initializing NVMe Controllers 00:10:14.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:14.902 controller IO queue size 128 less than required 00:10:14.902 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:14.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:14.902 Initialization complete. Launching workers. 00:10:14.902 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27605 00:10:14.902 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27666, failed to submit 62 00:10:14.902 success 27609, unsuccess 57, failed 0 00:10:14.902 02:29:48 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:14.903 02:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:14.903 02:29:48 -- common/autotest_common.sh@10 -- # set +x 00:10:14.903 02:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:14.903 02:29:48 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:14.903 02:29:48 -- target/abort.sh@38 -- # nvmftestfini 00:10:14.903 02:29:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:14.903 02:29:48 -- nvmf/common.sh@117 -- # sync 00:10:14.903 02:29:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:14.903 02:29:48 -- nvmf/common.sh@120 -- # set +e 00:10:14.903 02:29:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:14.903 02:29:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:14.903 rmmod nvme_tcp 00:10:14.903 rmmod nvme_fabrics 00:10:14.903 rmmod nvme_keyring 00:10:14.903 02:29:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:14.903 02:29:48 -- nvmf/common.sh@124 -- # set -e 00:10:14.903 02:29:48 -- nvmf/common.sh@125 -- # return 0 00:10:14.903 02:29:48 -- nvmf/common.sh@478 -- # '[' -n 4188334 ']' 00:10:14.903 02:29:48 -- nvmf/common.sh@479 -- # killprocess 4188334 00:10:14.903 02:29:48 -- common/autotest_common.sh@936 -- # '[' -z 4188334 ']' 00:10:14.903 02:29:48 -- common/autotest_common.sh@940 -- # kill -0 4188334 00:10:14.903 02:29:48 -- common/autotest_common.sh@941 -- # uname 00:10:14.903 02:29:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:14.903 02:29:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4188334 00:10:14.903 02:29:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:14.903 02:29:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:14.903 02:29:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4188334' 00:10:14.903 killing process with pid 4188334 00:10:14.903 02:29:48 -- common/autotest_common.sh@955 -- # kill 4188334 00:10:14.903 02:29:48 -- 
common/autotest_common.sh@960 -- # wait 4188334 00:10:14.903 02:29:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:14.903 02:29:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:14.903 02:29:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:14.903 02:29:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:14.903 02:29:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:14.903 02:29:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.903 02:29:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.903 02:29:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.853 02:29:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:16.853 00:10:16.853 real 0m12.594s 00:10:16.853 user 0m13.678s 00:10:16.853 sys 0m6.052s 00:10:16.853 02:29:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:16.853 02:29:50 -- common/autotest_common.sh@10 -- # set +x 00:10:16.853 ************************************ 00:10:16.853 END TEST nvmf_abort 00:10:16.853 ************************************ 00:10:17.114 02:29:50 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:17.114 02:29:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:17.114 02:29:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.114 02:29:50 -- common/autotest_common.sh@10 -- # set +x 00:10:17.114 ************************************ 00:10:17.114 START TEST nvmf_ns_hotplug_stress 00:10:17.114 ************************************ 00:10:17.114 02:29:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:17.114 * Looking for test storage... 
00:10:17.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.114 02:29:50 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.114 02:29:50 -- nvmf/common.sh@7 -- # uname -s 00:10:17.375 02:29:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.375 02:29:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.376 02:29:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.376 02:29:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.376 02:29:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.376 02:29:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.376 02:29:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.376 02:29:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.376 02:29:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.376 02:29:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.376 02:29:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:17.376 02:29:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:17.376 02:29:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.376 02:29:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.376 02:29:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.376 02:29:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.376 02:29:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.376 02:29:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.376 02:29:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.376 02:29:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.376 02:29:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.376 02:29:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.376 02:29:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.376 02:29:50 -- paths/export.sh@5 -- # export PATH 00:10:17.376 02:29:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.376 02:29:50 -- nvmf/common.sh@47 -- # : 0 00:10:17.376 02:29:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:17.376 02:29:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:17.376 02:29:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.376 02:29:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.376 02:29:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.376 02:29:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:17.376 02:29:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:17.376 02:29:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:17.376 02:29:50 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.376 02:29:50 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:10:17.376 02:29:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:17.376 02:29:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.376 02:29:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:17.376 02:29:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:17.376 02:29:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:17.376 02:29:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.376 02:29:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.376 02:29:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.376 02:29:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:17.376 02:29:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:17.376 02:29:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:17.376 02:29:50 -- common/autotest_common.sh@10 -- # set +x 00:10:23.970 02:29:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:23.970 02:29:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:23.970 02:29:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:23.970 02:29:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:23.970 02:29:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:23.970 02:29:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:23.970 02:29:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:23.970 02:29:57 -- nvmf/common.sh@295 -- # net_devs=() 00:10:23.970 02:29:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:23.970 02:29:57 -- nvmf/common.sh@296 
-- # e810=() 00:10:23.970 02:29:57 -- nvmf/common.sh@296 -- # local -ga e810 00:10:23.970 02:29:57 -- nvmf/common.sh@297 -- # x722=() 00:10:23.970 02:29:57 -- nvmf/common.sh@297 -- # local -ga x722 00:10:23.970 02:29:57 -- nvmf/common.sh@298 -- # mlx=() 00:10:23.970 02:29:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:23.970 02:29:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.970 02:29:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.970 02:29:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.970 02:29:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.970 02:29:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.970 02:29:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.970 02:29:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.970 02:29:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.970 02:29:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.970 02:29:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.970 02:29:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.970 02:29:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:23.970 02:29:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:23.970 02:29:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:23.970 02:29:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:23.970 02:29:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:23.970 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:23.970 02:29:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:23.970 02:29:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:23.970 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:23.970 02:29:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:23.970 02:29:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:23.970 02:29:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:23.971 02:29:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:23.971 02:29:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.971 02:29:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:23.971 02:29:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.971 02:29:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:23.971 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:10:23.971 02:29:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.971 02:29:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:23.971 02:29:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.971 02:29:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:23.971 02:29:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.971 02:29:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:23.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:23.971 02:29:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.971 02:29:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:23.971 02:29:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:23.971 02:29:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:23.971 02:29:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:23.971 02:29:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:23.971 02:29:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.971 02:29:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.971 02:29:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:23.971 02:29:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:23.971 02:29:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:23.971 02:29:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:23.971 02:29:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:23.971 02:29:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:23.971 02:29:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.971 02:29:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:23.971 02:29:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:23.971 02:29:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:23.971 02:29:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:23.971 02:29:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:23.971 02:29:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:23.971 02:29:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:23.971 02:29:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:24.232 02:29:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:24.232 02:29:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:24.232 02:29:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:24.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:24.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:10:24.232 00:10:24.232 --- 10.0.0.2 ping statistics --- 00:10:24.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.232 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:10:24.232 02:29:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:24.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:24.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:10:24.232 00:10:24.232 --- 10.0.0.1 ping statistics --- 00:10:24.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.232 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:10:24.232 02:29:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.232 02:29:57 -- nvmf/common.sh@411 -- # return 0 00:10:24.232 02:29:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:24.232 02:29:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.232 02:29:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:24.232 02:29:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:24.232 02:29:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.232 02:29:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:24.232 02:29:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:24.232 02:29:57 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:10:24.232 02:29:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:24.232 02:29:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:24.232 02:29:57 -- common/autotest_common.sh@10 -- # set +x 00:10:24.232 02:29:57 -- nvmf/common.sh@470 -- # nvmfpid=4193321 00:10:24.232 02:29:57 -- nvmf/common.sh@471 -- # waitforlisten 4193321 00:10:24.232 02:29:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:24.232 02:29:57 -- common/autotest_common.sh@817 -- # '[' -z 4193321 ']' 00:10:24.232 02:29:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.232 02:29:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:24.232 02:29:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.232 02:29:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:24.232 02:29:57 -- common/autotest_common.sh@10 -- # set +x 00:10:24.232 [2024-04-27 02:29:57.813533] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:10:24.232 [2024-04-27 02:29:57.813600] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.232 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.493 [2024-04-27 02:29:57.885222] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:24.493 [2024-04-27 02:29:57.956393] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.493 [2024-04-27 02:29:57.956430] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.493 [2024-04-27 02:29:57.956438] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.493 [2024-04-27 02:29:57.956444] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.493 [2024-04-27 02:29:57.956450] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
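For reference, the nvmf_tcp_init sequence traced above reduces to the short standalone sketch below. It is a reconstruction of what the trace shows rather than the framework's own code: the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are simply the values detected on this host, and the two E810 ports are assumed to reach each other over their physical link.

#!/usr/bin/env bash
# Sketch of the test network set up above: one port is moved into a private
# namespace and addressed as the NVMe/TCP target (10.0.0.2), the other stays
# in the default namespace as the initiator (10.0.0.1).
set -e
TARGET_IF=cvl_0_0      # host-specific, detected earlier in the trace
INITIATOR_IF=cvl_0_1   # host-specific, detected earlier in the trace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# Open TCP port 4420 on the default-namespace side, as the trace does.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Reachability checks in both directions before the target is started.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
modprobe nvme-tcp

Keeping the target behind its own namespace is what lets a single host act as both target and initiator over real NIC hardware; it is also why the target-side commands later in the log, including nvmf_tgt itself, are wrapped in 'ip netns exec cvl_0_0_ns_spdk' (NVMF_TARGET_NS_CMD).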
00:10:24.493 [2024-04-27 02:29:57.956564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.493 [2024-04-27 02:29:57.956703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.493 [2024-04-27 02:29:57.956706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.064 02:29:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:25.064 02:29:58 -- common/autotest_common.sh@850 -- # return 0 00:10:25.064 02:29:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:25.064 02:29:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:25.064 02:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:25.064 02:29:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.064 02:29:58 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:10:25.064 02:29:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:25.325 [2024-04-27 02:29:58.756786] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.325 02:29:58 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:25.586 02:29:58 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.586 [2024-04-27 02:29:59.094186] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.586 02:29:59 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:25.847 02:29:59 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:25.847 Malloc0 00:10:26.108 02:29:59 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:26.108 Delay0 00:10:26.108 02:29:59 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.370 02:29:59 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:26.370 NULL1 00:10:26.370 02:29:59 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:26.631 02:30:00 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:26.631 02:30:00 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=4193738 00:10:26.631 02:30:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:26.631 02:30:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.631 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.893 02:30:00 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.893 02:30:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:10:26.893 02:30:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:27.154 [2024-04-27 02:30:00.604381] bdev.c:4971:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:10:27.154 true 00:10:27.154 02:30:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:27.154 02:30:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.415 02:30:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.415 02:30:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:10:27.415 02:30:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:27.676 true 00:10:27.676 02:30:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:27.676 02:30:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.937 02:30:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.937 02:30:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:10:27.937 02:30:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:28.196 true 00:10:28.196 02:30:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:28.196 02:30:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.457 02:30:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.457 02:30:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:10:28.457 02:30:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:28.717 true 00:10:28.717 02:30:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:28.717 02:30:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.978 02:30:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.978 02:30:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:10:28.978 02:30:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:29.238 true 00:10:29.238 02:30:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:29.238 02:30:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.504 02:30:02 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.504 02:30:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:10:29.504 02:30:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:29.763 true 00:10:29.763 02:30:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:29.763 02:30:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.024 02:30:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.024 02:30:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:10:30.024 02:30:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:30.285 true 00:10:30.285 02:30:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:30.285 02:30:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.285 02:30:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.546 02:30:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:10:30.546 02:30:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:30.807 true 00:10:30.807 02:30:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:30.807 02:30:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.807 02:30:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.068 02:30:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:10:31.068 02:30:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:31.329 true 00:10:31.329 02:30:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:31.329 02:30:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.329 02:30:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.591 02:30:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:10:31.591 02:30:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:31.852 true 00:10:31.852 02:30:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:31.852 02:30:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.852 02:30:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:10:32.114 02:30:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:10:32.114 02:30:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:32.114 true 00:10:32.375 02:30:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:32.375 02:30:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.375 02:30:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.636 02:30:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:10:32.636 02:30:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:32.636 true 00:10:32.898 02:30:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:32.898 02:30:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.898 02:30:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.159 02:30:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:10:33.159 02:30:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:33.159 true 00:10:33.159 02:30:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:33.159 02:30:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.420 02:30:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.681 02:30:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:10:33.681 02:30:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:33.681 true 00:10:33.681 02:30:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:33.681 02:30:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.941 02:30:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.230 02:30:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:10:34.230 02:30:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:34.230 true 00:10:34.230 02:30:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:34.230 02:30:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.497 02:30:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.758 02:30:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:10:34.758 02:30:08 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:34.758 true 00:10:34.758 02:30:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:34.758 02:30:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.020 02:30:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.020 02:30:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:10:35.020 02:30:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:35.281 true 00:10:35.281 02:30:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:35.281 02:30:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.543 02:30:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.543 02:30:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:10:35.543 02:30:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:35.804 true 00:10:35.804 02:30:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:35.804 02:30:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.065 02:30:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.065 02:30:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:10:36.065 02:30:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:36.326 true 00:10:36.326 02:30:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:36.326 02:30:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.587 02:30:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.587 02:30:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:10:36.587 02:30:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:36.849 true 00:10:36.849 02:30:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:36.849 02:30:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.110 02:30:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.110 02:30:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:10:37.110 02:30:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1021 00:10:37.372 true 00:10:37.372 02:30:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:37.372 02:30:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.633 02:30:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.633 02:30:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:10:37.633 02:30:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:37.895 true 00:10:37.895 02:30:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:37.895 02:30:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.156 02:30:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.156 02:30:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:10:38.156 02:30:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:38.417 true 00:10:38.417 02:30:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:38.417 02:30:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.417 02:30:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.677 02:30:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:10:38.677 02:30:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:38.938 true 00:10:38.938 02:30:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:38.938 02:30:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.938 02:30:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.199 02:30:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:10:39.199 02:30:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:39.464 true 00:10:39.464 02:30:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:39.464 02:30:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.464 02:30:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.724 02:30:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:10:39.724 02:30:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:39.984 true 00:10:39.984 02:30:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:39.984 
02:30:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.984 02:30:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.244 02:30:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:10:40.244 02:30:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:40.244 true 00:10:40.503 02:30:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:40.503 02:30:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.503 02:30:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.764 02:30:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:10:40.764 02:30:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:40.764 true 00:10:41.024 02:30:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:41.024 02:30:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.024 02:30:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.286 02:30:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:10:41.286 02:30:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:41.286 true 00:10:41.547 02:30:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:41.548 02:30:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.548 02:30:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.809 02:30:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:10:41.809 02:30:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:41.809 true 00:10:41.809 02:30:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:41.809 02:30:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.069 02:30:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.330 02:30:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:10:42.330 02:30:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:42.330 true 00:10:42.330 02:30:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:42.330 02:30:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.592 02:30:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.853 02:30:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:10:42.853 02:30:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:42.853 true 00:10:42.853 02:30:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:42.853 02:30:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.115 02:30:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.376 02:30:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:10:43.376 02:30:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:43.376 true 00:10:43.376 02:30:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:43.376 02:30:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.637 02:30:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.897 02:30:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:10:43.897 02:30:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:43.897 true 00:10:43.897 02:30:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:43.897 02:30:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.158 02:30:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.419 02:30:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:10:44.419 02:30:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:44.419 true 00:10:44.419 02:30:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:44.419 02:30:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.681 02:30:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.942 02:30:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:10:44.942 02:30:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:44.942 true 00:10:44.942 02:30:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:44.942 02:30:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.203 02:30:18 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.464 02:30:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:10:45.464 02:30:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:45.464 true 00:10:45.464 02:30:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:45.464 02:30:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.725 02:30:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.725 02:30:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:10:45.725 02:30:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:45.986 true 00:10:45.986 02:30:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:45.986 02:30:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.247 02:30:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.247 02:30:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:10:46.247 02:30:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:46.508 true 00:10:46.508 02:30:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:46.508 02:30:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.768 02:30:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.768 02:30:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:10:46.768 02:30:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:47.029 true 00:10:47.029 02:30:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:47.029 02:30:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.289 02:30:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.289 02:30:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:10:47.289 02:30:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:47.551 true 00:10:47.551 02:30:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:47.551 02:30:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.812 02:30:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:10:47.812 02:30:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:10:47.812 02:30:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:48.073 true 00:10:48.073 02:30:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:48.073 02:30:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.334 02:30:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.334 02:30:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:10:48.334 02:30:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:48.594 true 00:10:48.594 02:30:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:48.595 02:30:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.856 02:30:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.856 02:30:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:10:48.856 02:30:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:49.118 true 00:10:49.118 02:30:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:49.118 02:30:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.380 02:30:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.380 02:30:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:10:49.380 02:30:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:49.640 true 00:10:49.640 02:30:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:49.640 02:30:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.901 02:30:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.901 02:30:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:10:49.901 02:30:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:50.162 true 00:10:50.162 02:30:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:50.162 02:30:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.422 02:30:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.422 02:30:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:10:50.422 02:30:23 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:50.683 true 00:10:50.683 02:30:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:50.683 02:30:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.945 02:30:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.945 02:30:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:10:50.945 02:30:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:51.207 true 00:10:51.207 02:30:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:51.207 02:30:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.207 02:30:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.467 02:30:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:10:51.467 02:30:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:51.728 true 00:10:51.728 02:30:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:51.728 02:30:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.728 02:30:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.988 02:30:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:10:51.988 02:30:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:52.249 true 00:10:52.249 02:30:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:52.249 02:30:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.249 02:30:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.510 02:30:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:10:52.510 02:30:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:52.771 true 00:10:52.771 02:30:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:52.771 02:30:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.771 02:30:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.033 02:30:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:10:53.033 02:30:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1052 00:10:53.294 true 00:10:53.294 02:30:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:53.294 02:30:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.294 02:30:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.555 02:30:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053 00:10:53.555 02:30:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:53.816 true 00:10:53.816 02:30:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:53.816 02:30:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.816 02:30:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.077 02:30:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1054 00:10:54.077 02:30:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:54.077 true 00:10:54.338 02:30:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:54.338 02:30:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.338 02:30:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.599 02:30:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1055 00:10:54.599 02:30:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:54.599 true 00:10:54.599 02:30:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:54.599 02:30:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.860 02:30:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.121 02:30:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1056 00:10:55.121 02:30:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:10:55.121 true 00:10:55.121 02:30:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:55.121 02:30:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.382 02:30:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.643 02:30:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1057 00:10:55.643 02:30:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:10:55.643 true 00:10:55.643 02:30:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:55.643 
02:30:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.905 02:30:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.167 02:30:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1058 00:10:56.167 02:30:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:10:56.167 true 00:10:56.167 02:30:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:56.167 02:30:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.428 02:30:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.689 02:30:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1059 00:10:56.689 02:30:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:10:56.689 true 00:10:56.689 02:30:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:56.689 02:30:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.968 Initializing NVMe Controllers 00:10:56.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:56.968 Controller IO queue size 128, less than required. 00:10:56.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:56.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:56.968 Initialization complete. Launching workers. 
00:10:56.968 ======================================================== 00:10:56.968 Latency(us) 00:10:56.968 Device Information : IOPS MiB/s Average min max 00:10:56.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 21102.65 10.30 6065.70 3526.10 10560.68 00:10:56.968 ======================================================== 00:10:56.968 Total : 21102.65 10.30 6065.70 3526.10 10560.68 00:10:56.968 00:10:56.968 02:30:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.294 02:30:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1060 00:10:57.294 02:30:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:10:57.294 true 00:10:57.294 02:30:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4193738 00:10:57.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (4193738) - No such process 00:10:57.294 02:30:30 -- target/ns_hotplug_stress.sh@44 -- # wait 4193738 00:10:57.294 02:30:30 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:57.294 02:30:30 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:10:57.294 02:30:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:57.294 02:30:30 -- nvmf/common.sh@117 -- # sync 00:10:57.294 02:30:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:57.294 02:30:30 -- nvmf/common.sh@120 -- # set +e 00:10:57.294 02:30:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:57.294 02:30:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:57.294 rmmod nvme_tcp 00:10:57.294 rmmod nvme_fabrics 00:10:57.294 rmmod nvme_keyring 00:10:57.294 02:30:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:57.294 02:30:30 -- nvmf/common.sh@124 -- # set -e 00:10:57.294 02:30:30 -- nvmf/common.sh@125 -- # return 0 00:10:57.294 02:30:30 -- nvmf/common.sh@478 -- # '[' -n 4193321 ']' 00:10:57.294 02:30:30 -- nvmf/common.sh@479 -- # killprocess 4193321 00:10:57.294 02:30:30 -- common/autotest_common.sh@936 -- # '[' -z 4193321 ']' 00:10:57.294 02:30:30 -- common/autotest_common.sh@940 -- # kill -0 4193321 00:10:57.294 02:30:30 -- common/autotest_common.sh@941 -- # uname 00:10:57.294 02:30:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:57.294 02:30:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4193321 00:10:57.294 02:30:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:57.294 02:30:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:57.294 02:30:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4193321' 00:10:57.294 killing process with pid 4193321 00:10:57.294 02:30:30 -- common/autotest_common.sh@955 -- # kill 4193321 00:10:57.294 02:30:30 -- common/autotest_common.sh@960 -- # wait 4193321 00:10:57.560 02:30:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:57.560 02:30:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:57.560 02:30:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:57.560 02:30:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:57.560 02:30:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:57.560 02:30:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.560 02:30:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.560 02:30:31 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.485 02:30:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:59.485 00:10:59.485 real 0m42.471s 00:10:59.485 user 2m35.115s 00:10:59.485 sys 0m12.502s 00:10:59.485 02:30:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:59.485 02:30:33 -- common/autotest_common.sh@10 -- # set +x 00:10:59.485 ************************************ 00:10:59.485 END TEST nvmf_ns_hotplug_stress 00:10:59.485 ************************************ 00:10:59.770 02:30:33 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:59.771 02:30:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:59.771 02:30:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.771 02:30:33 -- common/autotest_common.sh@10 -- # set +x 00:10:59.771 ************************************ 00:10:59.771 START TEST nvmf_connect_stress 00:10:59.771 ************************************ 00:10:59.771 02:30:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:00.048 * Looking for test storage... 00:11:00.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.048 02:30:33 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.048 02:30:33 -- nvmf/common.sh@7 -- # uname -s 00:11:00.048 02:30:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.048 02:30:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.048 02:30:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.048 02:30:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.048 02:30:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.048 02:30:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.048 02:30:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.048 02:30:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.048 02:30:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.048 02:30:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.048 02:30:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:00.048 02:30:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:00.048 02:30:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.048 02:30:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.048 02:30:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.048 02:30:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.048 02:30:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.048 02:30:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.048 02:30:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.048 02:30:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.049 02:30:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.049 02:30:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.049 02:30:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.049 02:30:33 -- paths/export.sh@5 -- # export PATH 00:11:00.049 02:30:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.049 02:30:33 -- nvmf/common.sh@47 -- # : 0 00:11:00.049 02:30:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.049 02:30:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.049 02:30:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.049 02:30:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.049 02:30:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.049 02:30:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.049 02:30:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.049 02:30:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.049 02:30:33 -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:00.049 02:30:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:00.049 02:30:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.049 02:30:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:00.049 02:30:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:00.049 02:30:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:00.049 02:30:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.049 02:30:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:00.049 02:30:33 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.049 02:30:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:00.049 02:30:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:00.049 02:30:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:00.049 02:30:33 -- common/autotest_common.sh@10 -- # set +x 00:11:06.654 02:30:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:06.654 02:30:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:06.654 02:30:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:06.654 02:30:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:06.654 02:30:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:06.654 02:30:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:06.654 02:30:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:06.654 02:30:39 -- nvmf/common.sh@295 -- # net_devs=() 00:11:06.654 02:30:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:06.654 02:30:39 -- nvmf/common.sh@296 -- # e810=() 00:11:06.654 02:30:39 -- nvmf/common.sh@296 -- # local -ga e810 00:11:06.654 02:30:39 -- nvmf/common.sh@297 -- # x722=() 00:11:06.654 02:30:39 -- nvmf/common.sh@297 -- # local -ga x722 00:11:06.654 02:30:39 -- nvmf/common.sh@298 -- # mlx=() 00:11:06.654 02:30:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:06.654 02:30:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.654 02:30:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.654 02:30:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.654 02:30:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.654 02:30:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.654 02:30:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.654 02:30:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.654 02:30:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.654 02:30:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.654 02:30:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.654 02:30:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.654 02:30:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:06.654 02:30:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:06.654 02:30:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:06.654 02:30:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:06.654 02:30:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:06.654 02:30:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:06.654 02:30:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.654 02:30:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:06.654 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:06.654 02:30:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.654 02:30:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:06.654 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:06.654 
02:30:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:06.654 02:30:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.654 02:30:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.654 02:30:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:06.654 02:30:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.654 02:30:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:06.654 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:06.654 02:30:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.654 02:30:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.654 02:30:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.654 02:30:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:06.654 02:30:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.654 02:30:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:06.654 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:06.654 02:30:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.654 02:30:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:06.654 02:30:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:06.654 02:30:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:06.654 02:30:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:06.654 02:30:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.654 02:30:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.654 02:30:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.654 02:30:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:06.654 02:30:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.654 02:30:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.654 02:30:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:06.654 02:30:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.654 02:30:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.654 02:30:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:06.654 02:30:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:06.654 02:30:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.654 02:30:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.654 02:30:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.654 02:30:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.654 02:30:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:06.654 02:30:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.654 02:30:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.916 02:30:40 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.916 02:30:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:06.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:11:06.916 00:11:06.916 --- 10.0.0.2 ping statistics --- 00:11:06.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.916 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:11:06.916 02:30:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:06.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.476 ms 00:11:06.916 00:11:06.916 --- 10.0.0.1 ping statistics --- 00:11:06.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.916 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:11:06.916 02:30:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.916 02:30:40 -- nvmf/common.sh@411 -- # return 0 00:11:06.916 02:30:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:06.916 02:30:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.916 02:30:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:06.916 02:30:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:06.916 02:30:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.916 02:30:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:06.916 02:30:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:06.916 02:30:40 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:06.916 02:30:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:06.916 02:30:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:06.916 02:30:40 -- common/autotest_common.sh@10 -- # set +x 00:11:06.916 02:30:40 -- nvmf/common.sh@470 -- # nvmfpid=11523 00:11:06.916 02:30:40 -- nvmf/common.sh@471 -- # waitforlisten 11523 00:11:06.916 02:30:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:06.916 02:30:40 -- common/autotest_common.sh@817 -- # '[' -z 11523 ']' 00:11:06.916 02:30:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.916 02:30:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:06.916 02:30:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.916 02:30:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:06.916 02:30:40 -- common/autotest_common.sh@10 -- # set +x 00:11:06.916 [2024-04-27 02:30:40.421700] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:11:06.916 [2024-04-27 02:30:40.421762] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.916 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.916 [2024-04-27 02:30:40.494056] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:07.177 [2024-04-27 02:30:40.569241] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:07.177 [2024-04-27 02:30:40.569289] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.177 [2024-04-27 02:30:40.569297] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.177 [2024-04-27 02:30:40.569303] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.177 [2024-04-27 02:30:40.569309] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.177 [2024-04-27 02:30:40.569476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.177 [2024-04-27 02:30:40.569661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.177 [2024-04-27 02:30:40.569665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.749 02:30:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:07.749 02:30:41 -- common/autotest_common.sh@850 -- # return 0 00:11:07.749 02:30:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:07.749 02:30:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:07.749 02:30:41 -- common/autotest_common.sh@10 -- # set +x 00:11:07.749 02:30:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.749 02:30:41 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.749 02:30:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:07.749 02:30:41 -- common/autotest_common.sh@10 -- # set +x 00:11:07.749 [2024-04-27 02:30:41.254657] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.749 02:30:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:07.749 02:30:41 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:07.749 02:30:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:07.749 02:30:41 -- common/autotest_common.sh@10 -- # set +x 00:11:07.749 02:30:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:07.749 02:30:41 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.749 02:30:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:07.749 02:30:41 -- common/autotest_common.sh@10 -- # set +x 00:11:07.749 [2024-04-27 02:30:41.279086] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.749 02:30:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:07.749 02:30:41 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:07.749 02:30:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:07.749 02:30:41 -- common/autotest_common.sh@10 -- # set +x 00:11:07.749 NULL1 00:11:07.749 02:30:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:07.749 02:30:41 -- target/connect_stress.sh@21 -- # PERF_PID=11634 00:11:07.749 02:30:41 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:07.749 02:30:41 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:07.749 02:30:41 -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:07.749 02:30:41 -- target/connect_stress.sh@27 -- # seq 1 20 00:11:07.750 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.750 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:07.750 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.750 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:07.750 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.750 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:07.750 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.750 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:07.750 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.750 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:07.750 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.750 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.750 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:07.750 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.750 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:07.750 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.750 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:07.750 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.750 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:07.750 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.750 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:07.750 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.750 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:07.750 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:07.750 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:08.012 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.012 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:08.012 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.012 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:08.012 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.012 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:08.012 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.012 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:08.012 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.012 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:08.012 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.012 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:08.012 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.012 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:08.012 02:30:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.012 02:30:41 -- target/connect_stress.sh@28 -- # cat 00:11:08.012 02:30:41 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:08.012 02:30:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.012 02:30:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.012 02:30:41 -- common/autotest_common.sh@10 -- # set +x 00:11:08.273 02:30:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:08.273 02:30:41 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:08.273 02:30:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.273 02:30:41 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.273 02:30:41 -- common/autotest_common.sh@10 -- # set +x 00:11:08.534 02:30:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:08.534 02:30:42 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:08.534 02:30:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.534 02:30:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.534 02:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:08.796 02:30:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:08.796 02:30:42 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:08.796 02:30:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.796 02:30:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.796 02:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:09.368 02:30:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:09.368 02:30:42 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:09.368 02:30:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.368 02:30:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:09.368 02:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:09.628 02:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:09.628 02:30:43 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:09.628 02:30:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.628 02:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:09.628 02:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:09.888 02:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:09.888 02:30:43 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:09.888 02:30:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.888 02:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:09.888 02:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:10.149 02:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.149 02:30:43 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:10.149 02:30:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.149 02:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.149 02:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:10.409 02:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.409 02:30:44 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:10.409 02:30:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.409 02:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.409 02:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:10.982 02:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.982 02:30:44 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:10.982 02:30:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.982 02:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.982 02:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:11.242 02:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.242 02:30:44 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:11.242 02:30:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.242 02:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.243 02:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:11.503 02:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.503 02:30:44 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:11.503 02:30:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.503 02:30:44 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:11:11.503 02:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:11.765 02:30:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.765 02:30:45 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:11.765 02:30:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.765 02:30:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.765 02:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:12.026 02:30:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.026 02:30:45 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:12.026 02:30:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.026 02:30:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.026 02:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:12.599 02:30:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.599 02:30:45 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:12.599 02:30:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.599 02:30:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.599 02:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:12.860 02:30:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.860 02:30:46 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:12.860 02:30:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.860 02:30:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.860 02:30:46 -- common/autotest_common.sh@10 -- # set +x 00:11:13.121 02:30:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.121 02:30:46 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:13.121 02:30:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.121 02:30:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.121 02:30:46 -- common/autotest_common.sh@10 -- # set +x 00:11:13.381 02:30:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.381 02:30:46 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:13.381 02:30:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.381 02:30:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.381 02:30:46 -- common/autotest_common.sh@10 -- # set +x 00:11:13.642 02:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.642 02:30:47 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:13.642 02:30:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.642 02:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.642 02:30:47 -- common/autotest_common.sh@10 -- # set +x 00:11:14.214 02:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.214 02:30:47 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:14.214 02:30:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.214 02:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.214 02:30:47 -- common/autotest_common.sh@10 -- # set +x 00:11:14.475 02:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.475 02:30:47 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:14.475 02:30:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.475 02:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.475 02:30:47 -- common/autotest_common.sh@10 -- # set +x 00:11:14.735 02:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.735 02:30:48 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:14.735 02:30:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.735 02:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.735 
02:30:48 -- common/autotest_common.sh@10 -- # set +x 00:11:14.996 02:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.996 02:30:48 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:14.996 02:30:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.996 02:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.996 02:30:48 -- common/autotest_common.sh@10 -- # set +x 00:11:15.568 02:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.568 02:30:48 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:15.568 02:30:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.568 02:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.568 02:30:48 -- common/autotest_common.sh@10 -- # set +x 00:11:15.828 02:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.828 02:30:49 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:15.828 02:30:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.828 02:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.828 02:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:16.089 02:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.089 02:30:49 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:16.089 02:30:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.089 02:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.089 02:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:16.349 02:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.349 02:30:49 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:16.349 02:30:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.349 02:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.349 02:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:16.633 02:30:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.633 02:30:50 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:16.633 02:30:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.633 02:30:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.633 02:30:50 -- common/autotest_common.sh@10 -- # set +x 00:11:17.205 02:30:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:17.205 02:30:50 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:17.205 02:30:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.205 02:30:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:17.205 02:30:50 -- common/autotest_common.sh@10 -- # set +x 00:11:17.468 02:30:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:17.468 02:30:50 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:17.468 02:30:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.468 02:30:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:17.468 02:30:50 -- common/autotest_common.sh@10 -- # set +x 00:11:17.729 02:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:17.729 02:30:51 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:17.729 02:30:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.729 02:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:17.729 02:30:51 -- common/autotest_common.sh@10 -- # set +x 00:11:17.989 02:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:17.989 02:30:51 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:17.989 02:30:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.989 02:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:17.989 02:30:51 -- 
common/autotest_common.sh@10 -- # set +x 00:11:17.989 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:18.250 02:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.250 02:30:51 -- target/connect_stress.sh@34 -- # kill -0 11634 00:11:18.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (11634) - No such process 00:11:18.250 02:30:51 -- target/connect_stress.sh@38 -- # wait 11634 00:11:18.250 02:30:51 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:18.250 02:30:51 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:18.250 02:30:51 -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:18.250 02:30:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:18.250 02:30:51 -- nvmf/common.sh@117 -- # sync 00:11:18.250 02:30:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:18.250 02:30:51 -- nvmf/common.sh@120 -- # set +e 00:11:18.250 02:30:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:18.250 02:30:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:18.250 rmmod nvme_tcp 00:11:18.250 rmmod nvme_fabrics 00:11:18.511 rmmod nvme_keyring 00:11:18.511 02:30:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:18.511 02:30:51 -- nvmf/common.sh@124 -- # set -e 00:11:18.511 02:30:51 -- nvmf/common.sh@125 -- # return 0 00:11:18.511 02:30:51 -- nvmf/common.sh@478 -- # '[' -n 11523 ']' 00:11:18.511 02:30:51 -- nvmf/common.sh@479 -- # killprocess 11523 00:11:18.511 02:30:51 -- common/autotest_common.sh@936 -- # '[' -z 11523 ']' 00:11:18.511 02:30:51 -- common/autotest_common.sh@940 -- # kill -0 11523 00:11:18.511 02:30:51 -- common/autotest_common.sh@941 -- # uname 00:11:18.511 02:30:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:18.511 02:30:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 11523 00:11:18.511 02:30:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:18.511 02:30:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:18.511 02:30:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 11523' 00:11:18.511 killing process with pid 11523 00:11:18.511 02:30:51 -- common/autotest_common.sh@955 -- # kill 11523 00:11:18.511 02:30:51 -- common/autotest_common.sh@960 -- # wait 11523 00:11:18.511 02:30:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:18.511 02:30:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:18.511 02:30:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:18.511 02:30:52 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:18.511 02:30:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:18.511 02:30:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.511 02:30:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:18.511 02:30:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.063 02:30:54 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:21.063 00:11:21.063 real 0m20.869s 00:11:21.063 user 0m42.999s 00:11:21.063 sys 0m8.577s 00:11:21.063 02:30:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:21.063 02:30:54 -- common/autotest_common.sh@10 -- # set +x 00:11:21.063 ************************************ 00:11:21.063 END TEST nvmf_connect_stress 00:11:21.063 ************************************ 00:11:21.063 02:30:54 -- nvmf/nvmf.sh@34 -- # run_test 
nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:21.063 02:30:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:21.064 02:30:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.064 02:30:54 -- common/autotest_common.sh@10 -- # set +x 00:11:21.064 ************************************ 00:11:21.064 START TEST nvmf_fused_ordering 00:11:21.064 ************************************ 00:11:21.064 02:30:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:21.064 * Looking for test storage... 00:11:21.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.064 02:30:54 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.064 02:30:54 -- nvmf/common.sh@7 -- # uname -s 00:11:21.064 02:30:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.064 02:30:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.064 02:30:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.064 02:30:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.064 02:30:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.064 02:30:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.064 02:30:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.064 02:30:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.064 02:30:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.064 02:30:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.064 02:30:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.064 02:30:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.064 02:30:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.064 02:30:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.064 02:30:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.064 02:30:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.064 02:30:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.064 02:30:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.064 02:30:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.064 02:30:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.064 02:30:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.064 02:30:54 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.064 02:30:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.064 02:30:54 -- paths/export.sh@5 -- # export PATH 00:11:21.064 02:30:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.064 02:30:54 -- nvmf/common.sh@47 -- # : 0 00:11:21.064 02:30:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.064 02:30:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.064 02:30:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.064 02:30:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.064 02:30:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.064 02:30:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.064 02:30:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.064 02:30:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.064 02:30:54 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:21.064 02:30:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:21.064 02:30:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.064 02:30:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:21.064 02:30:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:21.064 02:30:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:21.064 02:30:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.064 02:30:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.064 02:30:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.064 02:30:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:21.064 02:30:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:21.064 02:30:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.064 02:30:54 -- common/autotest_common.sh@10 -- # set +x 00:11:27.655 02:31:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:27.655 02:31:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:27.655 02:31:01 -- nvmf/common.sh@291 -- # local -a pci_devs 
00:11:27.655 02:31:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:27.655 02:31:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:27.655 02:31:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:27.655 02:31:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:27.655 02:31:01 -- nvmf/common.sh@295 -- # net_devs=() 00:11:27.655 02:31:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:27.655 02:31:01 -- nvmf/common.sh@296 -- # e810=() 00:11:27.655 02:31:01 -- nvmf/common.sh@296 -- # local -ga e810 00:11:27.655 02:31:01 -- nvmf/common.sh@297 -- # x722=() 00:11:27.655 02:31:01 -- nvmf/common.sh@297 -- # local -ga x722 00:11:27.655 02:31:01 -- nvmf/common.sh@298 -- # mlx=() 00:11:27.655 02:31:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:27.655 02:31:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.655 02:31:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.655 02:31:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.655 02:31:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.655 02:31:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.655 02:31:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.655 02:31:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.655 02:31:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.655 02:31:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.655 02:31:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.655 02:31:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.655 02:31:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:27.655 02:31:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:27.655 02:31:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:27.655 02:31:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.655 02:31:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:27.655 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:27.655 02:31:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.655 02:31:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:27.655 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:27.655 02:31:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:27.655 02:31:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:11:27.655 02:31:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.655 02:31:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.655 02:31:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:27.655 02:31:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.655 02:31:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:27.655 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:27.655 02:31:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.655 02:31:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.655 02:31:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.655 02:31:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:27.655 02:31:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.655 02:31:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:27.655 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:27.655 02:31:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.655 02:31:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:27.655 02:31:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:27.655 02:31:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:27.655 02:31:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:27.655 02:31:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.655 02:31:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.655 02:31:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.655 02:31:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:27.655 02:31:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.655 02:31:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.655 02:31:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:27.655 02:31:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.655 02:31:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.655 02:31:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:27.655 02:31:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:27.655 02:31:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.655 02:31:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.655 02:31:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.655 02:31:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.655 02:31:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:27.655 02:31:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.915 02:31:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.915 02:31:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.915 02:31:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:27.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:27.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:11:27.915 00:11:27.915 --- 10.0.0.2 ping statistics --- 00:11:27.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.915 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:11:27.916 02:31:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.409 ms 00:11:27.916 00:11:27.916 --- 10.0.0.1 ping statistics --- 00:11:27.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.916 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:11:27.916 02:31:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.916 02:31:01 -- nvmf/common.sh@411 -- # return 0 00:11:27.916 02:31:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:27.916 02:31:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.916 02:31:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:27.916 02:31:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:27.916 02:31:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.916 02:31:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:27.916 02:31:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:27.916 02:31:01 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:27.916 02:31:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:27.916 02:31:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:27.916 02:31:01 -- common/autotest_common.sh@10 -- # set +x 00:11:27.916 02:31:01 -- nvmf/common.sh@470 -- # nvmfpid=17985 00:11:27.916 02:31:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:27.916 02:31:01 -- nvmf/common.sh@471 -- # waitforlisten 17985 00:11:27.916 02:31:01 -- common/autotest_common.sh@817 -- # '[' -z 17985 ']' 00:11:27.916 02:31:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.916 02:31:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:27.916 02:31:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.916 02:31:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:27.916 02:31:01 -- common/autotest_common.sh@10 -- # set +x 00:11:27.916 [2024-04-27 02:31:01.435639] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:11:27.916 [2024-04-27 02:31:01.435690] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.916 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.916 [2024-04-27 02:31:01.499012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.176 [2024-04-27 02:31:01.561300] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.176 [2024-04-27 02:31:01.561333] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:28.176 [2024-04-27 02:31:01.561340] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.176 [2024-04-27 02:31:01.561347] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.176 [2024-04-27 02:31:01.561352] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.176 [2024-04-27 02:31:01.561370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.747 02:31:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:28.747 02:31:02 -- common/autotest_common.sh@850 -- # return 0 00:11:28.747 02:31:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:28.747 02:31:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:28.747 02:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:28.747 02:31:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.747 02:31:02 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.747 02:31:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.747 02:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:28.747 [2024-04-27 02:31:02.251668] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.747 02:31:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.747 02:31:02 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:28.747 02:31:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.747 02:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:28.747 02:31:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.747 02:31:02 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.747 02:31:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.747 02:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:28.747 [2024-04-27 02:31:02.275852] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.747 02:31:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.747 02:31:02 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:28.747 02:31:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.747 02:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:28.747 NULL1 00:11:28.747 02:31:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.747 02:31:02 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:28.747 02:31:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.747 02:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:28.747 02:31:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.747 02:31:02 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:28.747 02:31:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.747 02:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:28.747 02:31:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.747 02:31:02 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:28.747 [2024-04-27 02:31:02.338554] Starting SPDK v24.05-pre 
git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:11:28.747 [2024-04-27 02:31:02.338596] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid18023 ] 00:11:28.747 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.319 Attached to nqn.2016-06.io.spdk:cnode1 00:11:29.319 Namespace ID: 1 size: 1GB 00:11:29.319 fused_ordering(0) 00:11:29.319 fused_ordering(1) 00:11:29.319 fused_ordering(2) 00:11:29.319 fused_ordering(3) 00:11:29.319 fused_ordering(4) 00:11:29.319 fused_ordering(5) 00:11:29.319 fused_ordering(6) 00:11:29.319 fused_ordering(7) 00:11:29.319 fused_ordering(8) 00:11:29.319 fused_ordering(9) 00:11:29.319 fused_ordering(10) 00:11:29.319 fused_ordering(11) 00:11:29.319 fused_ordering(12) 00:11:29.319 fused_ordering(13) 00:11:29.319 fused_ordering(14) 00:11:29.319 fused_ordering(15) 00:11:29.319 fused_ordering(16) 00:11:29.319 fused_ordering(17) 00:11:29.319 fused_ordering(18) 00:11:29.319 fused_ordering(19) 00:11:29.319 fused_ordering(20) 00:11:29.319 fused_ordering(21) 00:11:29.319 fused_ordering(22) 00:11:29.319 fused_ordering(23) 00:11:29.319 fused_ordering(24) 00:11:29.319 fused_ordering(25) 00:11:29.319 fused_ordering(26) 00:11:29.319 fused_ordering(27) 00:11:29.319 fused_ordering(28) 00:11:29.319 fused_ordering(29) 00:11:29.319 fused_ordering(30) 00:11:29.319 fused_ordering(31) 00:11:29.319 fused_ordering(32) 00:11:29.319 fused_ordering(33) 00:11:29.319 fused_ordering(34) 00:11:29.319 fused_ordering(35) 00:11:29.319 fused_ordering(36) 00:11:29.319 fused_ordering(37) 00:11:29.319 fused_ordering(38) 00:11:29.319 fused_ordering(39) 00:11:29.319 fused_ordering(40) 00:11:29.319 fused_ordering(41) 00:11:29.319 fused_ordering(42) 00:11:29.319 fused_ordering(43) 00:11:29.319 fused_ordering(44) 00:11:29.319 fused_ordering(45) 00:11:29.319 fused_ordering(46) 00:11:29.319 fused_ordering(47) 00:11:29.319 fused_ordering(48) 00:11:29.319 fused_ordering(49) 00:11:29.319 fused_ordering(50) 00:11:29.319 fused_ordering(51) 00:11:29.319 fused_ordering(52) 00:11:29.319 fused_ordering(53) 00:11:29.319 fused_ordering(54) 00:11:29.319 fused_ordering(55) 00:11:29.319 fused_ordering(56) 00:11:29.319 fused_ordering(57) 00:11:29.319 fused_ordering(58) 00:11:29.319 fused_ordering(59) 00:11:29.319 fused_ordering(60) 00:11:29.319 fused_ordering(61) 00:11:29.319 fused_ordering(62) 00:11:29.319 fused_ordering(63) 00:11:29.319 fused_ordering(64) 00:11:29.319 fused_ordering(65) 00:11:29.319 fused_ordering(66) 00:11:29.319 fused_ordering(67) 00:11:29.319 fused_ordering(68) 00:11:29.319 fused_ordering(69) 00:11:29.319 fused_ordering(70) 00:11:29.319 fused_ordering(71) 00:11:29.319 fused_ordering(72) 00:11:29.319 fused_ordering(73) 00:11:29.319 fused_ordering(74) 00:11:29.319 fused_ordering(75) 00:11:29.319 fused_ordering(76) 00:11:29.319 fused_ordering(77) 00:11:29.319 fused_ordering(78) 00:11:29.319 fused_ordering(79) 00:11:29.319 fused_ordering(80) 00:11:29.319 fused_ordering(81) 00:11:29.319 fused_ordering(82) 00:11:29.319 fused_ordering(83) 00:11:29.319 fused_ordering(84) 00:11:29.319 fused_ordering(85) 00:11:29.319 fused_ordering(86) 00:11:29.319 fused_ordering(87) 00:11:29.319 fused_ordering(88) 00:11:29.319 fused_ordering(89) 00:11:29.319 fused_ordering(90) 00:11:29.319 fused_ordering(91) 00:11:29.319 fused_ordering(92) 00:11:29.319 fused_ordering(93) 00:11:29.319 fused_ordering(94) 00:11:29.319 fused_ordering(95) 
00:11:29.319 fused_ordering(96) … 00:11:32.401 fused_ordering(955): every index in this range reported, in order
00:11:32.401 fused_ordering(956) 00:11:32.401 fused_ordering(957) 00:11:32.401 fused_ordering(958) 00:11:32.401 fused_ordering(959) 00:11:32.401 fused_ordering(960) 00:11:32.401 fused_ordering(961) 00:11:32.401 fused_ordering(962) 00:11:32.401 fused_ordering(963) 00:11:32.401 fused_ordering(964) 00:11:32.401 fused_ordering(965) 00:11:32.401 fused_ordering(966) 00:11:32.401 fused_ordering(967) 00:11:32.401 fused_ordering(968) 00:11:32.401 fused_ordering(969) 00:11:32.401 fused_ordering(970) 00:11:32.401 fused_ordering(971) 00:11:32.401 fused_ordering(972) 00:11:32.401 fused_ordering(973) 00:11:32.401 fused_ordering(974) 00:11:32.401 fused_ordering(975) 00:11:32.401 fused_ordering(976) 00:11:32.401 fused_ordering(977) 00:11:32.401 fused_ordering(978) 00:11:32.401 fused_ordering(979) 00:11:32.401 fused_ordering(980) 00:11:32.401 fused_ordering(981) 00:11:32.401 fused_ordering(982) 00:11:32.401 fused_ordering(983) 00:11:32.401 fused_ordering(984) 00:11:32.401 fused_ordering(985) 00:11:32.401 fused_ordering(986) 00:11:32.401 fused_ordering(987) 00:11:32.401 fused_ordering(988) 00:11:32.401 fused_ordering(989) 00:11:32.401 fused_ordering(990) 00:11:32.401 fused_ordering(991) 00:11:32.401 fused_ordering(992) 00:11:32.401 fused_ordering(993) 00:11:32.401 fused_ordering(994) 00:11:32.401 fused_ordering(995) 00:11:32.401 fused_ordering(996) 00:11:32.401 fused_ordering(997) 00:11:32.401 fused_ordering(998) 00:11:32.401 fused_ordering(999) 00:11:32.401 fused_ordering(1000) 00:11:32.401 fused_ordering(1001) 00:11:32.401 fused_ordering(1002) 00:11:32.401 fused_ordering(1003) 00:11:32.401 fused_ordering(1004) 00:11:32.401 fused_ordering(1005) 00:11:32.401 fused_ordering(1006) 00:11:32.401 fused_ordering(1007) 00:11:32.401 fused_ordering(1008) 00:11:32.401 fused_ordering(1009) 00:11:32.401 fused_ordering(1010) 00:11:32.401 fused_ordering(1011) 00:11:32.401 fused_ordering(1012) 00:11:32.401 fused_ordering(1013) 00:11:32.401 fused_ordering(1014) 00:11:32.401 fused_ordering(1015) 00:11:32.401 fused_ordering(1016) 00:11:32.401 fused_ordering(1017) 00:11:32.401 fused_ordering(1018) 00:11:32.401 fused_ordering(1019) 00:11:32.401 fused_ordering(1020) 00:11:32.401 fused_ordering(1021) 00:11:32.401 fused_ordering(1022) 00:11:32.401 fused_ordering(1023) 00:11:32.401 02:31:05 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:32.401 02:31:05 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:32.401 02:31:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:32.401 02:31:05 -- nvmf/common.sh@117 -- # sync 00:11:32.401 02:31:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:32.401 02:31:05 -- nvmf/common.sh@120 -- # set +e 00:11:32.401 02:31:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:32.401 02:31:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:32.401 rmmod nvme_tcp 00:11:32.401 rmmod nvme_fabrics 00:11:32.401 rmmod nvme_keyring 00:11:32.401 02:31:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:32.401 02:31:05 -- nvmf/common.sh@124 -- # set -e 00:11:32.401 02:31:05 -- nvmf/common.sh@125 -- # return 0 00:11:32.401 02:31:05 -- nvmf/common.sh@478 -- # '[' -n 17985 ']' 00:11:32.401 02:31:05 -- nvmf/common.sh@479 -- # killprocess 17985 00:11:32.401 02:31:05 -- common/autotest_common.sh@936 -- # '[' -z 17985 ']' 00:11:32.401 02:31:05 -- common/autotest_common.sh@940 -- # kill -0 17985 00:11:32.401 02:31:05 -- common/autotest_common.sh@941 -- # uname 00:11:32.401 02:31:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:32.401 02:31:05 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 17985 00:11:32.401 02:31:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:32.401 02:31:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:32.401 02:31:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 17985' 00:11:32.401 killing process with pid 17985 00:11:32.401 02:31:05 -- common/autotest_common.sh@955 -- # kill 17985 00:11:32.401 02:31:05 -- common/autotest_common.sh@960 -- # wait 17985 00:11:32.663 02:31:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:32.663 02:31:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:32.663 02:31:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:32.663 02:31:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:32.663 02:31:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:32.663 02:31:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.663 02:31:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.663 02:31:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.579 02:31:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:34.579 00:11:34.579 real 0m13.780s 00:11:34.579 user 0m7.973s 00:11:34.579 sys 0m7.519s 00:11:34.579 02:31:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:34.579 02:31:08 -- common/autotest_common.sh@10 -- # set +x 00:11:34.579 ************************************ 00:11:34.579 END TEST nvmf_fused_ordering 00:11:34.579 ************************************ 00:11:34.579 02:31:08 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:34.579 02:31:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:34.579 02:31:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:34.579 02:31:08 -- common/autotest_common.sh@10 -- # set +x 00:11:34.840 ************************************ 00:11:34.840 START TEST nvmf_delete_subsystem 00:11:34.840 ************************************ 00:11:34.840 02:31:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:34.840 * Looking for test storage... 
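For the record, the target-side configuration exercised by the fused_ordering test that just ended comes down to six RPCs; rpc_cmd in the trace is essentially the autotest wrapper around scripts/rpc.py, so a minimal by-hand sketch of the same sequence looks like the following (arguments copied from the trace; paths relative to the SPDK checkout and the default RPC socket are assumed):

    # TCP transport, then one subsystem whose namespace 1 is a 1000 MiB null bdev with 512-byte blocks
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # Host side: point the fused_ordering tool at the listener; it prints the fused_ordering(0..1023) lines seen above
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Teardown in the trace is the standard nvmftestfini path: the nvme-tcp, nvme-fabrics and nvme-keyring modules are removed and the nvmf_tgt process (pid 17985 here) is killed, which accounts for the rmmod and killprocess lines just before END TEST.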
00:11:34.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.840 02:31:08 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.840 02:31:08 -- nvmf/common.sh@7 -- # uname -s 00:11:34.840 02:31:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.840 02:31:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.840 02:31:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.840 02:31:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.840 02:31:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.840 02:31:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.840 02:31:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.840 02:31:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.840 02:31:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.840 02:31:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.840 02:31:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:34.840 02:31:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:34.840 02:31:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.840 02:31:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.840 02:31:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.840 02:31:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.840 02:31:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.840 02:31:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.840 02:31:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.840 02:31:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.840 02:31:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.840 02:31:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.841 02:31:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.841 02:31:08 -- paths/export.sh@5 -- # export PATH 00:11:34.841 02:31:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.841 02:31:08 -- nvmf/common.sh@47 -- # : 0 00:11:34.841 02:31:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:34.841 02:31:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:34.841 02:31:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.841 02:31:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.841 02:31:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.841 02:31:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:34.841 02:31:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:34.841 02:31:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:34.841 02:31:08 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:34.841 02:31:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:34.841 02:31:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.841 02:31:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:34.841 02:31:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:34.841 02:31:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:34.841 02:31:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.841 02:31:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.841 02:31:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.841 02:31:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:34.841 02:31:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:34.841 02:31:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:34.841 02:31:08 -- common/autotest_common.sh@10 -- # set +x 00:11:42.986 02:31:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:42.986 02:31:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:42.986 02:31:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:42.986 02:31:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:42.986 02:31:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:42.986 02:31:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:42.986 02:31:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:42.986 02:31:15 -- nvmf/common.sh@295 -- # net_devs=() 00:11:42.986 02:31:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:42.986 02:31:15 -- nvmf/common.sh@296 -- # e810=() 00:11:42.986 02:31:15 -- nvmf/common.sh@296 -- # local -ga e810 00:11:42.986 02:31:15 -- nvmf/common.sh@297 -- # x722=() 
00:11:42.986 02:31:15 -- nvmf/common.sh@297 -- # local -ga x722 00:11:42.986 02:31:15 -- nvmf/common.sh@298 -- # mlx=() 00:11:42.986 02:31:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:42.986 02:31:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.986 02:31:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.986 02:31:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.986 02:31:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.986 02:31:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.986 02:31:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.986 02:31:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.986 02:31:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.986 02:31:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.986 02:31:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.986 02:31:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.986 02:31:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:42.986 02:31:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:42.986 02:31:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:42.986 02:31:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:42.986 02:31:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:42.986 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:42.986 02:31:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:42.986 02:31:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:42.986 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:42.986 02:31:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:42.986 02:31:15 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:42.986 02:31:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.986 02:31:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:42.986 02:31:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.986 02:31:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:42.986 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:42.986 02:31:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
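What the block above is doing: for NET_TYPE=phy, nvmf/common.sh buckets known PCI device IDs into e810, x722 and mlx lists, keeps the bucket selected for this run (e810, as the [[ e810 == e810 ]] checks show), and then resolves each matching PCI address to its kernel netdev through sysfs. A rough standalone sketch of that last lookup, using the two PCI functions found in this run (the loop itself is illustrative, not the script's exact code):

    # Map each NVMe-oF-capable PCI function to the kernel net interface(s) behind it
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: $(basename "$netdev")"
        done
    done

Both functions here are Intel E810 ports (device ID 0x159b, bound to the ice driver), which show up as cvl_0_0 and cvl_0_1 in the rest of the log.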
00:11:42.986 02:31:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:42.986 02:31:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.986 02:31:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:42.986 02:31:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.986 02:31:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:42.986 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:42.986 02:31:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.986 02:31:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:42.986 02:31:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:42.986 02:31:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:42.986 02:31:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:42.986 02:31:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.986 02:31:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.986 02:31:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.986 02:31:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:42.986 02:31:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.986 02:31:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.986 02:31:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:42.986 02:31:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.987 02:31:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.987 02:31:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:42.987 02:31:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:42.987 02:31:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.987 02:31:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.987 02:31:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.987 02:31:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.987 02:31:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:42.987 02:31:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.987 02:31:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.987 02:31:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.987 02:31:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:42.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:11:42.987 00:11:42.987 --- 10.0.0.2 ping statistics --- 00:11:42.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.987 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:11:42.987 02:31:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:11:42.987 00:11:42.987 --- 10.0.0.1 ping statistics --- 00:11:42.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.987 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:11:42.987 02:31:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.987 02:31:15 -- nvmf/common.sh@411 -- # return 0 00:11:42.987 02:31:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:42.987 02:31:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.987 02:31:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:42.987 02:31:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:42.987 02:31:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.987 02:31:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:42.987 02:31:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:42.987 02:31:15 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:42.987 02:31:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:42.987 02:31:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:42.987 02:31:15 -- common/autotest_common.sh@10 -- # set +x 00:11:42.987 02:31:15 -- nvmf/common.sh@470 -- # nvmfpid=23017 00:11:42.987 02:31:15 -- nvmf/common.sh@471 -- # waitforlisten 23017 00:11:42.987 02:31:15 -- common/autotest_common.sh@817 -- # '[' -z 23017 ']' 00:11:42.987 02:31:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:42.987 02:31:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.987 02:31:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:42.987 02:31:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.987 02:31:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:42.987 02:31:15 -- common/autotest_common.sh@10 -- # set +x 00:11:42.987 [2024-04-27 02:31:15.577414] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:11:42.987 [2024-04-27 02:31:15.577477] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.987 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.987 [2024-04-27 02:31:15.648417] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:42.987 [2024-04-27 02:31:15.720799] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.987 [2024-04-27 02:31:15.720837] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.987 [2024-04-27 02:31:15.720845] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.987 [2024-04-27 02:31:15.720851] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.987 [2024-04-27 02:31:15.720857] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
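The nvmf_tcp_init sequence above puts the target-side port into its own network namespace so that initiator traffic from cvl_0_1 genuinely crosses the link to cvl_0_0 rather than being looped back. Condensed from the trace, the setup is roughly the following (interface names, addresses and target arguments as in this run; the nvmf_tgt path is relative to the SPDK build tree, and the binary is simply backgrounded here rather than supervised the way the harness does it):

    # Target port goes into its own namespace; initiator keeps 10.0.0.1, target gets 10.0.0.2
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Accept inbound NVMe/TCP (port 4420) on the initiator-side interface and sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the target inside the namespace: shm id 0, all tracepoint groups, cores 0-1 (mask 0x3)
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

waitforlisten 23017 in the trace then essentially polls until that pid answers on its RPC socket; the reactor lines that follow confirm both cores came up before the delete_subsystem test starts issuing its own RPCs.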
00:11:42.987 [2024-04-27 02:31:15.720962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.987 [2024-04-27 02:31:15.720967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.987 02:31:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:42.987 02:31:16 -- common/autotest_common.sh@850 -- # return 0 00:11:42.987 02:31:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:42.987 02:31:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:42.987 02:31:16 -- common/autotest_common.sh@10 -- # set +x 00:11:42.987 02:31:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.987 02:31:16 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.987 02:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:42.987 02:31:16 -- common/autotest_common.sh@10 -- # set +x 00:11:42.987 [2024-04-27 02:31:16.393225] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.987 02:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.987 02:31:16 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:42.987 02:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:42.987 02:31:16 -- common/autotest_common.sh@10 -- # set +x 00:11:42.987 02:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.987 02:31:16 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.987 02:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:42.987 02:31:16 -- common/autotest_common.sh@10 -- # set +x 00:11:42.987 [2024-04-27 02:31:16.409390] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.987 02:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.987 02:31:16 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:42.987 02:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:42.987 02:31:16 -- common/autotest_common.sh@10 -- # set +x 00:11:42.987 NULL1 00:11:42.987 02:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.987 02:31:16 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:42.987 02:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:42.987 02:31:16 -- common/autotest_common.sh@10 -- # set +x 00:11:42.987 Delay0 00:11:42.987 02:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.987 02:31:16 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.987 02:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:42.987 02:31:16 -- common/autotest_common.sh@10 -- # set +x 00:11:42.987 02:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.987 02:31:16 -- target/delete_subsystem.sh@28 -- # perf_pid=23162 00:11:42.987 02:31:16 -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:42.987 02:31:16 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:42.987 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.987 [2024-04-27 02:31:16.494000] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:44.899 02:31:18 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.899 02:31:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:44.899 02:31:18 -- common/autotest_common.sh@10 -- # set +x 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 starting I/O failed: -6 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 starting I/O failed: -6 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 starting I/O failed: -6 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 starting I/O failed: -6 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 starting I/O failed: -6 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 starting I/O failed: -6 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 starting I/O failed: -6 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 starting I/O failed: -6 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 starting I/O failed: -6 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 starting I/O failed: -6 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 starting I/O failed: -6 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 [2024-04-27 02:31:18.622811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a7590 is same with the state(5) to be 
set 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Read completed with error (sct=0, sc=8) 00:11:45.159 Write completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 [2024-04-27 02:31:18.623806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a7110 is same with the state(5) to be set 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 starting I/O failed: -6 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error 
(sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 starting I/O failed: -6 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 starting I/O failed: -6 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 starting I/O failed: -6 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 starting I/O failed: -6 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 starting I/O failed: -6 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 starting I/O failed: -6 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 starting I/O failed: -6 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 starting I/O failed: -6 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 starting I/O failed: -6 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 [2024-04-27 02:31:18.624706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3c6c000c00 is same with the state(5) to be set 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, 
sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Write completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:45.160 Read completed with error (sct=0, sc=8) 00:11:46.102 [2024-04-27 02:31:19.593304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a6140 is same with the state(5) to be set 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 [2024-04-27 02:31:19.626245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a7850 is same with the state(5) to be set 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 
Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 [2024-04-27 02:31:19.627364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3c6c00bf90 is same with the state(5) to be set 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 [2024-04-27 02:31:19.627463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a7400 is same with the state(5) to be set 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Write completed 
with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 Write completed with error (sct=0, sc=8) 00:11:46.102 Read completed with error (sct=0, sc=8) 00:11:46.102 [2024-04-27 02:31:19.627660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3c6c00c690 is same with the state(5) to be set 00:11:46.102 [2024-04-27 02:31:19.628173] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a6140 (9): Bad file descriptor 00:11:46.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:46.102 02:31:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.102 02:31:19 -- target/delete_subsystem.sh@34 -- # delay=0 00:11:46.102 02:31:19 -- target/delete_subsystem.sh@35 -- # kill -0 23162 00:11:46.102 02:31:19 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:46.102 Initializing NVMe Controllers 00:11:46.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:46.102 Controller IO queue size 128, less than required. 00:11:46.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:46.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:46.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:46.102 Initialization complete. Launching workers. 00:11:46.102 ======================================================== 00:11:46.102 Latency(us) 00:11:46.103 Device Information : IOPS MiB/s Average min max 00:11:46.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.96 0.08 889258.94 716.22 1010893.77 00:11:46.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.06 0.08 983338.84 285.27 2002735.49 00:11:46.103 ======================================================== 00:11:46.103 Total : 327.01 0.16 933868.25 285.27 2002735.49 00:11:46.103 00:11:46.674 02:31:20 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:46.674 02:31:20 -- target/delete_subsystem.sh@35 -- # kill -0 23162 00:11:46.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (23162) - No such process 00:11:46.675 02:31:20 -- target/delete_subsystem.sh@45 -- # NOT wait 23162 00:11:46.675 02:31:20 -- common/autotest_common.sh@638 -- # local es=0 00:11:46.675 02:31:20 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 23162 00:11:46.675 02:31:20 -- common/autotest_common.sh@626 -- # local arg=wait 00:11:46.675 02:31:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.675 02:31:20 -- common/autotest_common.sh@630 -- # type -t wait 00:11:46.675 02:31:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.675 02:31:20 -- common/autotest_common.sh@641 -- # wait 23162 00:11:46.675 02:31:20 -- common/autotest_common.sh@641 -- # es=1 00:11:46.675 02:31:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:46.675 02:31:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:46.675 02:31:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:46.675 02:31:20 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:46.675 02:31:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.675 02:31:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.675 
02:31:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.675 02:31:20 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.675 02:31:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.675 02:31:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.675 [2024-04-27 02:31:20.158978] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.675 02:31:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.675 02:31:20 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.675 02:31:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.675 02:31:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.675 02:31:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.675 02:31:20 -- target/delete_subsystem.sh@54 -- # perf_pid=23981 00:11:46.675 02:31:20 -- target/delete_subsystem.sh@56 -- # delay=0 00:11:46.675 02:31:20 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:46.675 02:31:20 -- target/delete_subsystem.sh@57 -- # kill -0 23981 00:11:46.675 02:31:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:46.675 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.675 [2024-04-27 02:31:20.225401] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:47.246 02:31:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.246 02:31:20 -- target/delete_subsystem.sh@57 -- # kill -0 23981 00:11:47.246 02:31:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.818 02:31:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.819 02:31:21 -- target/delete_subsystem.sh@57 -- # kill -0 23981 00:11:47.819 02:31:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.080 02:31:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.080 02:31:21 -- target/delete_subsystem.sh@57 -- # kill -0 23981 00:11:48.080 02:31:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.651 02:31:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.651 02:31:22 -- target/delete_subsystem.sh@57 -- # kill -0 23981 00:11:48.651 02:31:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.221 02:31:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.221 02:31:22 -- target/delete_subsystem.sh@57 -- # kill -0 23981 00:11:49.221 02:31:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.795 02:31:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.795 02:31:23 -- target/delete_subsystem.sh@57 -- # kill -0 23981 00:11:49.795 02:31:23 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.795 Initializing NVMe Controllers 00:11:49.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:49.795 Controller IO queue size 128, less than required. 00:11:49.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
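The xtrace above drives the whole test through the framework's rpc_cmd wrapper. Condensed into plain commands against an already running nvmf_tgt (every method name, path and flag below is taken from this trace; treating rpc_cmd as equivalent to calling scripts/rpc.py directly is an assumption), the delete-while-I/O-is-queued flow is roughly this sketch:

# Sketch only, reconstructed from the xtrace above, not a copy of delete_subsystem.sh.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Queue I/O against the delayed namespace, then delete the subsystem underneath it.
$PERF -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
delay=0
while kill -0 "$perf_pid" 2>/dev/null && (( delay++ <= 30 )); do sleep 0.5; done

The Delay0 bdev's artificial completion delay is what keeps the 128-deep queue busy long enough for the delete to win the race, which is why the first perf run above ends with a burst of aborted completions (sct=0, sc=8) and reports 'errors occurred' alongside its latency summary.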
00:11:49.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:49.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:49.795 Initialization complete. Launching workers. 00:11:49.795 ======================================================== 00:11:49.795 Latency(us) 00:11:49.795 Device Information : IOPS MiB/s Average min max 00:11:49.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003696.85 1000239.86 1011232.06 00:11:49.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006059.34 1000439.18 1014563.39 00:11:49.796 ======================================================== 00:11:49.796 Total : 256.00 0.12 1004878.09 1000239.86 1014563.39 00:11:49.796 00:11:50.366 02:31:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:50.366 02:31:23 -- target/delete_subsystem.sh@57 -- # kill -0 23981 00:11:50.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (23981) - No such process 00:11:50.366 02:31:23 -- target/delete_subsystem.sh@67 -- # wait 23981 00:11:50.366 02:31:23 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:50.366 02:31:23 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:50.366 02:31:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:50.366 02:31:23 -- nvmf/common.sh@117 -- # sync 00:11:50.366 02:31:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:50.366 02:31:23 -- nvmf/common.sh@120 -- # set +e 00:11:50.366 02:31:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:50.366 02:31:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:50.366 rmmod nvme_tcp 00:11:50.366 rmmod nvme_fabrics 00:11:50.366 rmmod nvme_keyring 00:11:50.366 02:31:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:50.366 02:31:23 -- nvmf/common.sh@124 -- # set -e 00:11:50.366 02:31:23 -- nvmf/common.sh@125 -- # return 0 00:11:50.366 02:31:23 -- nvmf/common.sh@478 -- # '[' -n 23017 ']' 00:11:50.366 02:31:23 -- nvmf/common.sh@479 -- # killprocess 23017 00:11:50.366 02:31:23 -- common/autotest_common.sh@936 -- # '[' -z 23017 ']' 00:11:50.366 02:31:23 -- common/autotest_common.sh@940 -- # kill -0 23017 00:11:50.366 02:31:23 -- common/autotest_common.sh@941 -- # uname 00:11:50.366 02:31:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:50.366 02:31:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 23017 00:11:50.366 02:31:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:50.366 02:31:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:50.366 02:31:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 23017' 00:11:50.366 killing process with pid 23017 00:11:50.366 02:31:23 -- common/autotest_common.sh@955 -- # kill 23017 00:11:50.366 02:31:23 -- common/autotest_common.sh@960 -- # wait 23017 00:11:50.366 02:31:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:50.366 02:31:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:50.366 02:31:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:50.366 02:31:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:50.366 02:31:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:50.366 02:31:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.366 02:31:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.366 02:31:23 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.913 02:31:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:52.913 00:11:52.913 real 0m17.734s 00:11:52.913 user 0m30.578s 00:11:52.913 sys 0m6.183s 00:11:52.913 02:31:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:52.913 02:31:26 -- common/autotest_common.sh@10 -- # set +x 00:11:52.913 ************************************ 00:11:52.913 END TEST nvmf_delete_subsystem 00:11:52.913 ************************************ 00:11:52.913 02:31:26 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:52.913 02:31:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:52.913 02:31:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.913 02:31:26 -- common/autotest_common.sh@10 -- # set +x 00:11:52.913 ************************************ 00:11:52.913 START TEST nvmf_ns_masking 00:11:52.913 ************************************ 00:11:52.913 02:31:26 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:52.913 * Looking for test storage... 00:11:52.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.914 02:31:26 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.914 02:31:26 -- nvmf/common.sh@7 -- # uname -s 00:11:52.914 02:31:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.914 02:31:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.914 02:31:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.914 02:31:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.914 02:31:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.914 02:31:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.914 02:31:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.914 02:31:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.914 02:31:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.914 02:31:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.914 02:31:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:52.914 02:31:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:52.914 02:31:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.914 02:31:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.914 02:31:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.914 02:31:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.914 02:31:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.914 02:31:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.914 02:31:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.914 02:31:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.914 02:31:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.914 02:31:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.914 02:31:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.914 02:31:26 -- paths/export.sh@5 -- # export PATH 00:11:52.914 02:31:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.914 02:31:26 -- nvmf/common.sh@47 -- # : 0 00:11:52.914 02:31:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:52.914 02:31:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:52.914 02:31:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.914 02:31:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.914 02:31:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.914 02:31:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:52.914 02:31:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:52.914 02:31:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:52.914 02:31:26 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.914 02:31:26 -- target/ns_masking.sh@11 -- # loops=5 00:11:52.914 02:31:26 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:52.914 02:31:26 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:52.914 02:31:26 -- target/ns_masking.sh@15 -- # uuidgen 00:11:52.914 02:31:26 -- target/ns_masking.sh@15 -- # HOSTID=648aafc2-5cb3-4a96-94f7-df1a0e78f862 00:11:52.914 02:31:26 -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:52.914 02:31:26 -- 
nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:52.914 02:31:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.914 02:31:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:52.914 02:31:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:52.914 02:31:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:52.914 02:31:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.914 02:31:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.914 02:31:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.914 02:31:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:52.914 02:31:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:52.914 02:31:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:52.914 02:31:26 -- common/autotest_common.sh@10 -- # set +x 00:11:59.505 02:31:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:59.505 02:31:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:59.505 02:31:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:59.505 02:31:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:59.505 02:31:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:59.505 02:31:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:59.505 02:31:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:59.505 02:31:33 -- nvmf/common.sh@295 -- # net_devs=() 00:11:59.505 02:31:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:59.505 02:31:33 -- nvmf/common.sh@296 -- # e810=() 00:11:59.505 02:31:33 -- nvmf/common.sh@296 -- # local -ga e810 00:11:59.505 02:31:33 -- nvmf/common.sh@297 -- # x722=() 00:11:59.505 02:31:33 -- nvmf/common.sh@297 -- # local -ga x722 00:11:59.505 02:31:33 -- nvmf/common.sh@298 -- # mlx=() 00:11:59.505 02:31:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:59.505 02:31:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.505 02:31:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.505 02:31:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.505 02:31:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.505 02:31:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.505 02:31:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.505 02:31:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.505 02:31:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.505 02:31:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.505 02:31:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.505 02:31:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.505 02:31:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:59.505 02:31:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:59.505 02:31:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:59.505 02:31:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.505 02:31:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:59.505 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:59.505 02:31:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
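gather_supported_nvmf_pci_devs, traced above, selects the test NICs purely by PCI vendor and device ID (0x8086:0x159b here, one of the Intel E810 entries handled by the ice driver) and then maps each matching PCI function to its kernel netdev through sysfs. A minimal stand-alone version of that last lookup, using the PCI addresses reported in this run and assuming the pci_bus_cache population done elsewhere in common.sh, would be:

# List the net devices backing the two E810 functions found above.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
    done
done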
00:11:59.505 02:31:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.505 02:31:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:59.505 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:59.505 02:31:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:59.505 02:31:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.505 02:31:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.505 02:31:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:59.505 02:31:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.505 02:31:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:59.505 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:59.505 02:31:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.505 02:31:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.505 02:31:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.505 02:31:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:59.505 02:31:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.505 02:31:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:59.505 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:59.505 02:31:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.505 02:31:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:59.505 02:31:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:59.505 02:31:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:59.505 02:31:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:59.505 02:31:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.505 02:31:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.505 02:31:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.505 02:31:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:59.505 02:31:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.505 02:31:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.505 02:31:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:59.505 02:31:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.505 02:31:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.505 02:31:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:59.505 02:31:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:59.505 02:31:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.505 02:31:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:11:59.766 02:31:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.766 02:31:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.766 02:31:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:59.766 02:31:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.766 02:31:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.766 02:31:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.766 02:31:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:59.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:11:59.766 00:11:59.766 --- 10.0.0.2 ping statistics --- 00:11:59.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.766 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:11:59.766 02:31:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:11:59.766 00:11:59.766 --- 10.0.0.1 ping statistics --- 00:11:59.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.766 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:11:59.766 02:31:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.766 02:31:33 -- nvmf/common.sh@411 -- # return 0 00:11:59.766 02:31:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:59.766 02:31:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.766 02:31:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:59.766 02:31:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:59.766 02:31:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.766 02:31:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:59.766 02:31:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:59.766 02:31:33 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:59.766 02:31:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:59.766 02:31:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:59.766 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:11:59.766 02:31:33 -- nvmf/common.sh@470 -- # nvmfpid=28726 00:11:59.766 02:31:33 -- nvmf/common.sh@471 -- # waitforlisten 28726 00:11:59.766 02:31:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.766 02:31:33 -- common/autotest_common.sh@817 -- # '[' -z 28726 ']' 00:11:59.766 02:31:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.766 02:31:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:59.766 02:31:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.766 02:31:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:59.766 02:31:33 -- common/autotest_common.sh@10 -- # set +x 00:12:00.027 [2024-04-27 02:31:33.418996] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
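nvmf_tcp_init, traced here, builds the test network by moving one port of the NIC pair into its own network namespace, so the NVMe/TCP target address (10.0.0.2) and the initiator address (10.0.0.1) live on the same host but exchange traffic over the real link. Stripped of the xtrace markers, the setup performed in this run is:

# Target side: cvl_0_0 inside the cvl_0_0_ns_spdk namespace, 10.0.0.2/24.
# Initiator side: cvl_0_1 in the default namespace, 10.0.0.1/24.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept inbound TCP 4420 on the initiator-side port
ping -c 1 10.0.0.2                                             # default namespace -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> initiator address

The two ping round trips recorded just after this are the sanity check that both directions work before nvmf_tgt is later launched inside the namespace via ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix seen in the trace).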
00:12:00.027 [2024-04-27 02:31:33.419048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.027 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.027 [2024-04-27 02:31:33.485884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.027 [2024-04-27 02:31:33.553477] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.027 [2024-04-27 02:31:33.553513] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.027 [2024-04-27 02:31:33.553523] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.027 [2024-04-27 02:31:33.553531] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.027 [2024-04-27 02:31:33.553541] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.027 [2024-04-27 02:31:33.553657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.027 [2024-04-27 02:31:33.553774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.027 [2024-04-27 02:31:33.553901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.027 [2024-04-27 02:31:33.553904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.597 02:31:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:00.597 02:31:34 -- common/autotest_common.sh@850 -- # return 0 00:12:00.597 02:31:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:00.597 02:31:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:00.597 02:31:34 -- common/autotest_common.sh@10 -- # set +x 00:12:00.859 02:31:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.859 02:31:34 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:00.859 [2024-04-27 02:31:34.372355] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.859 02:31:34 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:12:00.859 02:31:34 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:12:00.859 02:31:34 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:01.120 Malloc1 00:12:01.120 02:31:34 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:01.120 Malloc2 00:12:01.380 02:31:34 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:01.380 02:31:34 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:01.641 02:31:35 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.641 [2024-04-27 02:31:35.225431] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.641 02:31:35 -- target/ns_masking.sh@61 -- # connect 
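The connect helper invoked at the end of this setup wraps nvme-cli: it attaches to cnode1 with a fixed host NQN and host ID, waits for a namespace carrying the subsystem serial to appear, and then resolves which controller node the kernel created. A rough equivalent built only from the commands visible in the surrounding trace (the host NQN and host ID values are the ones generated for this run), follows as a sketch:

# Sketch of connect/waitforserial as exercised by ns_masking.sh in this run.
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2016-06.io.spdk:host1
HOSTID=648aafc2-5cb3-4a96-94f7-df1a0e78f862
nvme connect -t tcp -n "$SUBSYSNQN" -q "$HOSTNQN" -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4
# Wait until lsblk reports a device with the subsystem serial number.
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
# Resolve the controller (nvme0 in this run) that backs the subsystem.
ctrl_id=$(nvme list-subsys -o json \
    | jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name')

When connect is called with an argument later in the trace (connect 1, connect 2), the expected device count passed to waitforserial changes accordingly; the sketch above hard-codes the single-device case.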
00:12:01.641 02:31:35 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 648aafc2-5cb3-4a96-94f7-df1a0e78f862 -a 10.0.0.2 -s 4420 -i 4 00:12:01.902 02:31:35 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.902 02:31:35 -- common/autotest_common.sh@1184 -- # local i=0 00:12:01.902 02:31:35 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.902 02:31:35 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:01.902 02:31:35 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:03.814 02:31:37 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:03.814 02:31:37 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:03.814 02:31:37 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.814 02:31:37 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:03.814 02:31:37 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.814 02:31:37 -- common/autotest_common.sh@1194 -- # return 0 00:12:03.814 02:31:37 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:03.814 02:31:37 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:04.074 02:31:37 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:04.074 02:31:37 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:04.074 02:31:37 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:12:04.074 02:31:37 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:04.074 02:31:37 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:04.074 [ 0]:0x1 00:12:04.074 02:31:37 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.074 02:31:37 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:04.074 02:31:37 -- target/ns_masking.sh@40 -- # nguid=87c8a38d12524011aee89895680163ea 00:12:04.074 02:31:37 -- target/ns_masking.sh@41 -- # [[ 87c8a38d12524011aee89895680163ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.074 02:31:37 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:04.074 02:31:37 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:12:04.075 02:31:37 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:04.075 02:31:37 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:04.075 [ 0]:0x1 00:12:04.075 02:31:37 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.075 02:31:37 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:04.335 02:31:37 -- target/ns_masking.sh@40 -- # nguid=87c8a38d12524011aee89895680163ea 00:12:04.335 02:31:37 -- target/ns_masking.sh@41 -- # [[ 87c8a38d12524011aee89895680163ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.335 02:31:37 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:12:04.335 02:31:37 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:04.335 02:31:37 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:04.335 [ 1]:0x2 00:12:04.335 02:31:37 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:04.335 02:31:37 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:04.335 02:31:37 -- target/ns_masking.sh@40 -- # nguid=5125bd4d64374832b135c9a14a9bd97b 00:12:04.335 02:31:37 -- target/ns_masking.sh@41 -- # [[ 5125bd4d64374832b135c9a14a9bd97b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.335 02:31:37 -- target/ns_masking.sh@69 -- # disconnect 00:12:04.335 02:31:37 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.335 02:31:37 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.596 02:31:38 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:04.596 02:31:38 -- target/ns_masking.sh@77 -- # connect 1 00:12:04.596 02:31:38 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 648aafc2-5cb3-4a96-94f7-df1a0e78f862 -a 10.0.0.2 -s 4420 -i 4 00:12:04.857 02:31:38 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:04.857 02:31:38 -- common/autotest_common.sh@1184 -- # local i=0 00:12:04.857 02:31:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.857 02:31:38 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:12:04.857 02:31:38 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:12:04.857 02:31:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:06.771 02:31:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:06.771 02:31:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:06.771 02:31:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.771 02:31:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:06.771 02:31:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.771 02:31:40 -- common/autotest_common.sh@1194 -- # return 0 00:12:07.032 02:31:40 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:07.032 02:31:40 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:07.032 02:31:40 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:07.032 02:31:40 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:07.032 02:31:40 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:12:07.032 02:31:40 -- common/autotest_common.sh@638 -- # local es=0 00:12:07.032 02:31:40 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:07.032 02:31:40 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:07.032 02:31:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.032 02:31:40 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:07.032 02:31:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.033 02:31:40 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:07.033 02:31:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.033 02:31:40 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:07.033 02:31:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.033 02:31:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.033 02:31:40 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:07.033 02:31:40 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.033 02:31:40 -- 
common/autotest_common.sh@641 -- # es=1 00:12:07.033 02:31:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:07.033 02:31:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:07.033 02:31:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:07.033 02:31:40 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:12:07.033 02:31:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.033 02:31:40 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:07.033 [ 0]:0x2 00:12:07.033 02:31:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.033 02:31:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.033 02:31:40 -- target/ns_masking.sh@40 -- # nguid=5125bd4d64374832b135c9a14a9bd97b 00:12:07.033 02:31:40 -- target/ns_masking.sh@41 -- # [[ 5125bd4d64374832b135c9a14a9bd97b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.033 02:31:40 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.294 02:31:40 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:12:07.294 02:31:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.294 02:31:40 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:07.294 [ 0]:0x1 00:12:07.294 02:31:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.294 02:31:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.294 02:31:40 -- target/ns_masking.sh@40 -- # nguid=87c8a38d12524011aee89895680163ea 00:12:07.294 02:31:40 -- target/ns_masking.sh@41 -- # [[ 87c8a38d12524011aee89895680163ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.294 02:31:40 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:12:07.294 02:31:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.294 02:31:40 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:07.294 [ 1]:0x2 00:12:07.294 02:31:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.294 02:31:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.584 02:31:40 -- target/ns_masking.sh@40 -- # nguid=5125bd4d64374832b135c9a14a9bd97b 00:12:07.584 02:31:40 -- target/ns_masking.sh@41 -- # [[ 5125bd4d64374832b135c9a14a9bd97b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.584 02:31:40 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.584 02:31:41 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:12:07.584 02:31:41 -- common/autotest_common.sh@638 -- # local es=0 00:12:07.584 02:31:41 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:07.584 02:31:41 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:07.584 02:31:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.584 02:31:41 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:07.584 02:31:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.584 02:31:41 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:07.584 02:31:41 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.584 02:31:41 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:07.584 02:31:41 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.584 02:31:41 -- target/ns_masking.sh@40 -- # jq -r .nguid 
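ns_is_visible, called over and over in this block, decides visibility from the initiator side by reading the namespace's NGUID through the connected controller: a visible namespace reports its real NGUID, while a masked one reads back as all zeroes, which is exactly the pattern in the nguid assignments above. A condensed sketch of the check as it appears in this trace (controller name nvme0 and the NSIDs are this run's values):

# Sketch of ns_is_visible <nsid> from ns_masking.sh as traced above.
ns_is_visible() {
    local nsid=$1
    nvme list-ns /dev/nvme0 | grep "$nsid"                # prints e.g. "[ 0]:0x1" when the namespace is listed
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]    # masked namespaces read back an all-zero NGUID
}
ns_is_visible 0x1
ns_is_visible 0x2

On the target side the toggles driving these checks are nvmf_subsystem_add_ns with --no-auto-visible followed by nvmf_ns_add_host and nvmf_ns_remove_host for nqn.2016-06.io.spdk:host1, exactly as issued through rpc.py in the surrounding lines.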
00:12:07.584 02:31:41 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:07.584 02:31:41 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.584 02:31:41 -- common/autotest_common.sh@641 -- # es=1 00:12:07.584 02:31:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:07.584 02:31:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:07.584 02:31:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:07.584 02:31:41 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:12:07.584 02:31:41 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.584 02:31:41 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:07.584 [ 0]:0x2 00:12:07.584 02:31:41 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.584 02:31:41 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.848 02:31:41 -- target/ns_masking.sh@40 -- # nguid=5125bd4d64374832b135c9a14a9bd97b 00:12:07.848 02:31:41 -- target/ns_masking.sh@41 -- # [[ 5125bd4d64374832b135c9a14a9bd97b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.848 02:31:41 -- target/ns_masking.sh@91 -- # disconnect 00:12:07.848 02:31:41 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.848 02:31:41 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.848 02:31:41 -- target/ns_masking.sh@95 -- # connect 2 00:12:07.848 02:31:41 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 648aafc2-5cb3-4a96-94f7-df1a0e78f862 -a 10.0.0.2 -s 4420 -i 4 00:12:08.110 02:31:41 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:08.110 02:31:41 -- common/autotest_common.sh@1184 -- # local i=0 00:12:08.110 02:31:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.110 02:31:41 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:12:08.110 02:31:41 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:12:08.110 02:31:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:10.024 02:31:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:10.024 02:31:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:10.024 02:31:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.024 02:31:43 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:12:10.024 02:31:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.024 02:31:43 -- common/autotest_common.sh@1194 -- # return 0 00:12:10.024 02:31:43 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:10.024 02:31:43 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:10.285 02:31:43 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:10.285 02:31:43 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:10.285 02:31:43 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:12:10.285 02:31:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.285 02:31:43 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:10.285 [ 0]:0x1 00:12:10.285 02:31:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 
00:12:10.285 02:31:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.285 02:31:43 -- target/ns_masking.sh@40 -- # nguid=87c8a38d12524011aee89895680163ea 00:12:10.285 02:31:43 -- target/ns_masking.sh@41 -- # [[ 87c8a38d12524011aee89895680163ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.285 02:31:43 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:12:10.285 02:31:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.285 02:31:43 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:10.285 [ 1]:0x2 00:12:10.285 02:31:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.285 02:31:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.285 02:31:43 -- target/ns_masking.sh@40 -- # nguid=5125bd4d64374832b135c9a14a9bd97b 00:12:10.285 02:31:43 -- target/ns_masking.sh@41 -- # [[ 5125bd4d64374832b135c9a14a9bd97b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.285 02:31:43 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:10.547 02:31:44 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:12:10.547 02:31:44 -- common/autotest_common.sh@638 -- # local es=0 00:12:10.547 02:31:44 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:10.547 02:31:44 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:10.547 02:31:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:10.547 02:31:44 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:10.547 02:31:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:10.547 02:31:44 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:10.547 02:31:44 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:10.547 02:31:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.547 02:31:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.547 02:31:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.547 02:31:44 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:10.547 02:31:44 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.547 02:31:44 -- common/autotest_common.sh@641 -- # es=1 00:12:10.547 02:31:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:10.547 02:31:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:10.547 02:31:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:10.547 02:31:44 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:12:10.547 02:31:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.547 02:31:44 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:10.547 [ 0]:0x2 00:12:10.547 02:31:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.547 02:31:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.807 02:31:44 -- target/ns_masking.sh@40 -- # nguid=5125bd4d64374832b135c9a14a9bd97b 00:12:10.807 02:31:44 -- target/ns_masking.sh@41 -- # [[ 5125bd4d64374832b135c9a14a9bd97b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.807 02:31:44 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:10.807 02:31:44 -- 
common/autotest_common.sh@638 -- # local es=0 00:12:10.807 02:31:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:10.807 02:31:44 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.807 02:31:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:10.807 02:31:44 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.807 02:31:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:10.807 02:31:44 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.807 02:31:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:10.807 02:31:44 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.807 02:31:44 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:10.807 02:31:44 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:10.807 [2024-04-27 02:31:44.323109] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:10.807 request: 00:12:10.807 { 00:12:10.807 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.807 "nsid": 2, 00:12:10.807 "host": "nqn.2016-06.io.spdk:host1", 00:12:10.807 "method": "nvmf_ns_remove_host", 00:12:10.807 "req_id": 1 00:12:10.807 } 00:12:10.807 Got JSON-RPC error response 00:12:10.807 response: 00:12:10.807 { 00:12:10.807 "code": -32602, 00:12:10.807 "message": "Invalid parameters" 00:12:10.807 } 00:12:10.807 02:31:44 -- common/autotest_common.sh@641 -- # es=1 00:12:10.807 02:31:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:10.807 02:31:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:10.807 02:31:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:10.807 02:31:44 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:12:10.807 02:31:44 -- common/autotest_common.sh@638 -- # local es=0 00:12:10.807 02:31:44 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:10.807 02:31:44 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:10.807 02:31:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:10.807 02:31:44 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:10.807 02:31:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:10.807 02:31:44 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:10.807 02:31:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.807 02:31:44 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:10.807 02:31:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.807 02:31:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.807 02:31:44 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:10.807 02:31:44 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.807 02:31:44 -- common/autotest_common.sh@641 -- # es=1 00:12:10.807 02:31:44 -- 
common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:10.807 02:31:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:10.807 02:31:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:10.807 02:31:44 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:12:10.807 02:31:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.807 02:31:44 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:10.807 [ 0]:0x2 00:12:10.807 02:31:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.807 02:31:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:11.067 02:31:44 -- target/ns_masking.sh@40 -- # nguid=5125bd4d64374832b135c9a14a9bd97b 00:12:11.067 02:31:44 -- target/ns_masking.sh@41 -- # [[ 5125bd4d64374832b135c9a14a9bd97b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.067 02:31:44 -- target/ns_masking.sh@108 -- # disconnect 00:12:11.067 02:31:44 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.067 02:31:44 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.067 02:31:44 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:12:11.067 02:31:44 -- target/ns_masking.sh@114 -- # nvmftestfini 00:12:11.067 02:31:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:11.067 02:31:44 -- nvmf/common.sh@117 -- # sync 00:12:11.067 02:31:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:11.067 02:31:44 -- nvmf/common.sh@120 -- # set +e 00:12:11.067 02:31:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:11.067 02:31:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:11.067 rmmod nvme_tcp 00:12:11.327 rmmod nvme_fabrics 00:12:11.327 rmmod nvme_keyring 00:12:11.327 02:31:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:11.327 02:31:44 -- nvmf/common.sh@124 -- # set -e 00:12:11.327 02:31:44 -- nvmf/common.sh@125 -- # return 0 00:12:11.327 02:31:44 -- nvmf/common.sh@478 -- # '[' -n 28726 ']' 00:12:11.327 02:31:44 -- nvmf/common.sh@479 -- # killprocess 28726 00:12:11.327 02:31:44 -- common/autotest_common.sh@936 -- # '[' -z 28726 ']' 00:12:11.327 02:31:44 -- common/autotest_common.sh@940 -- # kill -0 28726 00:12:11.327 02:31:44 -- common/autotest_common.sh@941 -- # uname 00:12:11.327 02:31:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:11.327 02:31:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 28726 00:12:11.327 02:31:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:11.327 02:31:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:11.327 02:31:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 28726' 00:12:11.327 killing process with pid 28726 00:12:11.327 02:31:44 -- common/autotest_common.sh@955 -- # kill 28726 00:12:11.327 02:31:44 -- common/autotest_common.sh@960 -- # wait 28726 00:12:11.327 02:31:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:11.327 02:31:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:11.327 02:31:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:11.327 02:31:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.327 02:31:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.327 02:31:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.327 02:31:44 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:12:11.588 02:31:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.503 02:31:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:13.503 00:12:13.503 real 0m20.793s 00:12:13.503 user 0m49.906s 00:12:13.503 sys 0m6.732s 00:12:13.503 02:31:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:13.503 02:31:47 -- common/autotest_common.sh@10 -- # set +x 00:12:13.503 ************************************ 00:12:13.503 END TEST nvmf_ns_masking 00:12:13.503 ************************************ 00:12:13.503 02:31:47 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:13.503 02:31:47 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:13.503 02:31:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:13.503 02:31:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.503 02:31:47 -- common/autotest_common.sh@10 -- # set +x 00:12:13.763 ************************************ 00:12:13.763 START TEST nvmf_nvme_cli 00:12:13.763 ************************************ 00:12:13.763 02:31:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:13.763 * Looking for test storage... 00:12:13.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.764 02:31:47 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.764 02:31:47 -- nvmf/common.sh@7 -- # uname -s 00:12:13.764 02:31:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.764 02:31:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.764 02:31:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.764 02:31:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.764 02:31:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.764 02:31:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.764 02:31:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.764 02:31:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.764 02:31:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.764 02:31:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.764 02:31:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:13.764 02:31:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:13.764 02:31:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.764 02:31:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.764 02:31:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.764 02:31:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.764 02:31:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.764 02:31:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.764 02:31:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.764 02:31:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.764 02:31:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.764 02:31:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.764 02:31:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.764 02:31:47 -- paths/export.sh@5 -- # export PATH 00:12:13.764 02:31:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.764 02:31:47 -- nvmf/common.sh@47 -- # : 0 00:12:13.764 02:31:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.764 02:31:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.764 02:31:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.764 02:31:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.764 02:31:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.764 02:31:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.764 02:31:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.764 02:31:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.764 02:31:47 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:13.764 02:31:47 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:13.764 02:31:47 -- target/nvme_cli.sh@14 -- # devs=() 00:12:13.764 02:31:47 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:13.764 02:31:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:13.764 02:31:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.764 02:31:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:13.764 02:31:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:13.764 02:31:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 
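As the common.sh lines above show, the test derives its initiator identity from nvme gen-hostnqn. A small sketch of that derivation (the variable handling here is illustrative; the real logic lives in test/nvmf/common.sh):
NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}             # assumption: the uuid suffix doubles as the host ID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# later reused verbatim on the initiator side, e.g.:
# nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420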
00:12:13.764 02:31:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.764 02:31:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.764 02:31:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.764 02:31:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:13.764 02:31:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:13.764 02:31:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:13.764 02:31:47 -- common/autotest_common.sh@10 -- # set +x 00:12:21.916 02:31:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:21.916 02:31:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:21.916 02:31:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:21.916 02:31:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:21.916 02:31:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:21.916 02:31:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:21.916 02:31:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:21.916 02:31:54 -- nvmf/common.sh@295 -- # net_devs=() 00:12:21.916 02:31:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:21.916 02:31:54 -- nvmf/common.sh@296 -- # e810=() 00:12:21.916 02:31:54 -- nvmf/common.sh@296 -- # local -ga e810 00:12:21.916 02:31:54 -- nvmf/common.sh@297 -- # x722=() 00:12:21.916 02:31:54 -- nvmf/common.sh@297 -- # local -ga x722 00:12:21.916 02:31:54 -- nvmf/common.sh@298 -- # mlx=() 00:12:21.916 02:31:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:21.916 02:31:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.916 02:31:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.916 02:31:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.916 02:31:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.916 02:31:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.916 02:31:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.916 02:31:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.916 02:31:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.916 02:31:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.916 02:31:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.916 02:31:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.916 02:31:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:21.916 02:31:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:21.916 02:31:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:21.916 02:31:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.916 02:31:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:21.916 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:21.916 02:31:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.916 02:31:54 -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.916 02:31:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:21.916 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:21.916 02:31:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:21.916 02:31:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.916 02:31:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.916 02:31:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:21.916 02:31:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.916 02:31:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:21.916 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:21.916 02:31:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.916 02:31:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.916 02:31:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.916 02:31:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:21.916 02:31:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.916 02:31:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:21.916 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:21.916 02:31:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.916 02:31:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:21.916 02:31:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:21.916 02:31:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:21.916 02:31:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.916 02:31:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.916 02:31:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.916 02:31:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:21.916 02:31:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.916 02:31:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.916 02:31:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:21.916 02:31:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.916 02:31:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.916 02:31:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:21.916 02:31:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:21.916 02:31:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.916 02:31:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.916 02:31:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.916 02:31:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.916 02:31:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:21.916 02:31:54 -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.916 02:31:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.916 02:31:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.916 02:31:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:21.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:12:21.916 00:12:21.916 --- 10.0.0.2 ping statistics --- 00:12:21.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.916 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:12:21.916 02:31:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:12:21.916 00:12:21.916 --- 10.0.0.1 ping statistics --- 00:12:21.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.916 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:12:21.916 02:31:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.916 02:31:54 -- nvmf/common.sh@411 -- # return 0 00:12:21.916 02:31:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:21.916 02:31:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.916 02:31:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:21.916 02:31:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.916 02:31:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:21.916 02:31:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:21.917 02:31:54 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:21.917 02:31:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:21.917 02:31:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:21.917 02:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:21.917 02:31:54 -- nvmf/common.sh@470 -- # nvmfpid=35248 00:12:21.917 02:31:54 -- nvmf/common.sh@471 -- # waitforlisten 35248 00:12:21.917 02:31:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.917 02:31:54 -- common/autotest_common.sh@817 -- # '[' -z 35248 ']' 00:12:21.917 02:31:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.917 02:31:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:21.917 02:31:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.917 02:31:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:21.917 02:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:21.917 [2024-04-27 02:31:54.424874] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
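The nvmf_tcp_init steps above build a point-to-point topology out of the two E810 ports: the target port is moved into its own network namespace, addresses are assigned on both sides, and the test pings in each direction before loading nvme-tcp. Condensed from the commands in this trace (interface names and addresses exactly as logged):
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # target reachable from the initiator
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # and the initiator from the target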
00:12:21.917 [2024-04-27 02:31:54.424920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.917 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.917 [2024-04-27 02:31:54.489533] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.917 [2024-04-27 02:31:54.553407] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.917 [2024-04-27 02:31:54.553445] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.917 [2024-04-27 02:31:54.553454] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.917 [2024-04-27 02:31:54.553461] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.917 [2024-04-27 02:31:54.553468] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.917 [2024-04-27 02:31:54.553576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.917 [2024-04-27 02:31:54.553690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.917 [2024-04-27 02:31:54.553816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.917 [2024-04-27 02:31:54.553819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.917 02:31:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:21.917 02:31:55 -- common/autotest_common.sh@850 -- # return 0 00:12:21.917 02:31:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:21.917 02:31:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:21.917 02:31:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.917 02:31:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.917 02:31:55 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.917 02:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.917 02:31:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.917 [2024-04-27 02:31:55.241999] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.917 02:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.917 02:31:55 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:21.917 02:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.917 02:31:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.917 Malloc0 00:12:21.917 02:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.917 02:31:55 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:21.917 02:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.917 02:31:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.917 Malloc1 00:12:21.917 02:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.917 02:31:55 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:21.917 02:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.917 02:31:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.917 02:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.917 02:31:55 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:21.917 02:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.917 02:31:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.917 02:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.917 02:31:55 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:21.917 02:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.917 02:31:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.917 02:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.917 02:31:55 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.917 02:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.917 02:31:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.917 [2024-04-27 02:31:55.331909] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.917 02:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.917 02:31:55 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:21.917 02:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.917 02:31:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.917 02:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.917 02:31:55 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:21.917 00:12:21.917 Discovery Log Number of Records 2, Generation counter 2 00:12:21.917 =====Discovery Log Entry 0====== 00:12:21.917 trtype: tcp 00:12:21.917 adrfam: ipv4 00:12:21.917 subtype: current discovery subsystem 00:12:21.917 treq: not required 00:12:21.917 portid: 0 00:12:21.917 trsvcid: 4420 00:12:21.917 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:21.917 traddr: 10.0.0.2 00:12:21.917 eflags: explicit discovery connections, duplicate discovery information 00:12:21.917 sectype: none 00:12:21.917 =====Discovery Log Entry 1====== 00:12:21.917 trtype: tcp 00:12:21.917 adrfam: ipv4 00:12:21.917 subtype: nvme subsystem 00:12:21.917 treq: not required 00:12:21.917 portid: 0 00:12:21.917 trsvcid: 4420 00:12:21.917 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:21.917 traddr: 10.0.0.2 00:12:21.917 eflags: none 00:12:21.917 sectype: none 00:12:21.917 02:31:55 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:21.917 02:31:55 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:21.917 02:31:55 -- nvmf/common.sh@511 -- # local dev _ 00:12:21.917 02:31:55 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:21.917 02:31:55 -- nvmf/common.sh@510 -- # nvme list 00:12:21.917 02:31:55 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:21.917 02:31:55 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:21.917 02:31:55 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:21.917 02:31:55 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:21.917 02:31:55 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:21.917 02:31:55 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.304 02:31:56 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:23.304 02:31:56 -- 
common/autotest_common.sh@1184 -- # local i=0 00:12:23.304 02:31:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.304 02:31:56 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:12:23.304 02:31:56 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:12:23.304 02:31:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:25.864 02:31:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:25.864 02:31:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:25.864 02:31:58 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.864 02:31:58 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:12:25.864 02:31:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.864 02:31:58 -- common/autotest_common.sh@1194 -- # return 0 00:12:25.864 02:31:58 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:25.864 02:31:58 -- nvmf/common.sh@511 -- # local dev _ 00:12:25.864 02:31:58 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:25.864 02:31:58 -- nvmf/common.sh@510 -- # nvme list 00:12:25.864 02:31:59 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:25.864 02:31:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:25.864 02:31:59 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:25.864 02:31:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:25.864 02:31:59 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:25.864 02:31:59 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:25.864 02:31:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:25.864 02:31:59 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:25.864 02:31:59 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:25.864 02:31:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:25.864 02:31:59 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:25.864 /dev/nvme0n1 ]] 00:12:25.864 02:31:59 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:25.864 02:31:59 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:25.864 02:31:59 -- nvmf/common.sh@511 -- # local dev _ 00:12:25.864 02:31:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:25.864 02:31:59 -- nvmf/common.sh@510 -- # nvme list 00:12:25.864 02:31:59 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:25.864 02:31:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:25.864 02:31:59 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:25.864 02:31:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:25.864 02:31:59 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:25.864 02:31:59 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:25.864 02:31:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:25.864 02:31:59 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:25.864 02:31:59 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:25.864 02:31:59 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:25.864 02:31:59 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:25.864 02:31:59 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.125 02:31:59 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.125 02:31:59 -- common/autotest_common.sh@1205 -- # local i=0 00:12:26.125 02:31:59 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:26.125 02:31:59 -- common/autotest_common.sh@1206 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:26.125 02:31:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:26.125 02:31:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.125 02:31:59 -- common/autotest_common.sh@1217 -- # return 0 00:12:26.125 02:31:59 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:26.125 02:31:59 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.125 02:31:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.125 02:31:59 -- common/autotest_common.sh@10 -- # set +x 00:12:26.125 02:31:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.125 02:31:59 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:26.125 02:31:59 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:26.125 02:31:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:26.125 02:31:59 -- nvmf/common.sh@117 -- # sync 00:12:26.125 02:31:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.125 02:31:59 -- nvmf/common.sh@120 -- # set +e 00:12:26.125 02:31:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.125 02:31:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.125 rmmod nvme_tcp 00:12:26.125 rmmod nvme_fabrics 00:12:26.125 rmmod nvme_keyring 00:12:26.125 02:31:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.125 02:31:59 -- nvmf/common.sh@124 -- # set -e 00:12:26.125 02:31:59 -- nvmf/common.sh@125 -- # return 0 00:12:26.125 02:31:59 -- nvmf/common.sh@478 -- # '[' -n 35248 ']' 00:12:26.125 02:31:59 -- nvmf/common.sh@479 -- # killprocess 35248 00:12:26.125 02:31:59 -- common/autotest_common.sh@936 -- # '[' -z 35248 ']' 00:12:26.125 02:31:59 -- common/autotest_common.sh@940 -- # kill -0 35248 00:12:26.125 02:31:59 -- common/autotest_common.sh@941 -- # uname 00:12:26.125 02:31:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.125 02:31:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 35248 00:12:26.125 02:31:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:26.125 02:31:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:26.125 02:31:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 35248' 00:12:26.125 killing process with pid 35248 00:12:26.125 02:31:59 -- common/autotest_common.sh@955 -- # kill 35248 00:12:26.125 02:31:59 -- common/autotest_common.sh@960 -- # wait 35248 00:12:26.387 02:31:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:26.387 02:31:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:26.387 02:31:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:26.387 02:31:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.387 02:31:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:26.387 02:31:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.387 02:31:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.387 02:31:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.301 02:32:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:28.301 00:12:28.301 real 0m14.691s 00:12:28.301 user 0m23.076s 00:12:28.301 sys 0m5.718s 00:12:28.301 02:32:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:28.301 02:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:28.301 ************************************ 00:12:28.301 END TEST nvmf_nvme_cli 00:12:28.301 ************************************ 00:12:28.562 
02:32:01 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:28.562 02:32:01 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:28.562 02:32:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:28.562 02:32:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.562 02:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:28.562 ************************************ 00:12:28.562 START TEST nvmf_vfio_user 00:12:28.562 ************************************ 00:12:28.562 02:32:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:28.562 * Looking for test storage... 00:12:28.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.562 02:32:02 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.562 02:32:02 -- nvmf/common.sh@7 -- # uname -s 00:12:28.824 02:32:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.824 02:32:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.824 02:32:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.824 02:32:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.824 02:32:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.824 02:32:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.824 02:32:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.824 02:32:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.824 02:32:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.824 02:32:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.824 02:32:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:28.824 02:32:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:28.824 02:32:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.824 02:32:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.824 02:32:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.824 02:32:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.824 02:32:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.824 02:32:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.824 02:32:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.824 02:32:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.824 02:32:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.824 02:32:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.824 02:32:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.824 02:32:02 -- paths/export.sh@5 -- # export PATH 00:12:28.824 02:32:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.824 02:32:02 -- nvmf/common.sh@47 -- # : 0 00:12:28.824 02:32:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.824 02:32:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.824 02:32:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.824 02:32:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.824 02:32:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.824 02:32:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.824 02:32:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.824 02:32:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=37104 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 37104' 00:12:28.824 Process pid: 37104 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:28.824 
02:32:02 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 37104 00:12:28.824 02:32:02 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:28.824 02:32:02 -- common/autotest_common.sh@817 -- # '[' -z 37104 ']' 00:12:28.824 02:32:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.824 02:32:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:28.824 02:32:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.824 02:32:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:28.824 02:32:02 -- common/autotest_common.sh@10 -- # set +x 00:12:28.824 [2024-04-27 02:32:02.269818] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:12:28.824 [2024-04-27 02:32:02.269883] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.824 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.824 [2024-04-27 02:32:02.337543] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.824 [2024-04-27 02:32:02.412404] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.824 [2024-04-27 02:32:02.412443] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.824 [2024-04-27 02:32:02.412452] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.824 [2024-04-27 02:32:02.412460] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.824 [2024-04-27 02:32:02.412466] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
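waitforlisten 37104 blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. The real helper is defined in common/autotest_common.sh; conceptually it is a poll loop along these lines (illustrative only, not the actual implementation):
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1              # give up if the target process died
    if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; then
        break                                             # RPC socket is up, target is listening
    fi
    sleep 0.5
done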
00:12:28.824 [2024-04-27 02:32:02.412595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.824 [2024-04-27 02:32:02.412740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.824 [2024-04-27 02:32:02.412875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.824 [2024-04-27 02:32:02.412878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.768 02:32:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:29.768 02:32:03 -- common/autotest_common.sh@850 -- # return 0 00:12:29.768 02:32:03 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:30.711 02:32:04 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:30.711 02:32:04 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:30.711 02:32:04 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:30.711 02:32:04 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:30.711 02:32:04 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:30.711 02:32:04 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:30.972 Malloc1 00:12:30.972 02:32:04 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:30.972 02:32:04 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:31.234 02:32:04 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:31.495 02:32:04 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:31.495 02:32:04 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:31.495 02:32:04 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:31.495 Malloc2 00:12:31.495 02:32:05 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:31.756 02:32:05 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:32.017 02:32:05 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:32.017 02:32:05 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:32.017 02:32:05 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:32.017 02:32:05 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:32.017 02:32:05 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:32.017 02:32:05 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:32.017 02:32:05 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 
subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:32.017 [2024-04-27 02:32:05.622453] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:12:32.017 [2024-04-27 02:32:05.622498] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid37929 ] 00:12:32.017 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.281 [2024-04-27 02:32:05.652915] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:32.281 [2024-04-27 02:32:05.661569] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:32.281 [2024-04-27 02:32:05.661588] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f09d1612000 00:12:32.281 [2024-04-27 02:32:05.662570] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.281 [2024-04-27 02:32:05.663573] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.281 [2024-04-27 02:32:05.664585] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.281 [2024-04-27 02:32:05.665589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.281 [2024-04-27 02:32:05.666600] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.281 [2024-04-27 02:32:05.667596] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.281 [2024-04-27 02:32:05.668613] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.281 [2024-04-27 02:32:05.669611] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.281 [2024-04-27 02:32:05.670622] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:32.281 [2024-04-27 02:32:05.670634] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f09d1607000 00:12:32.282 [2024-04-27 02:32:05.671961] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:32.282 [2024-04-27 02:32:05.688905] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:32.282 [2024-04-27 02:32:05.688927] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:32.282 [2024-04-27 02:32:05.693810] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:32.282 [2024-04-27 02:32:05.693852] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 
num_trackers = 192 00:12:32.282 [2024-04-27 02:32:05.693938] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:32.282 [2024-04-27 02:32:05.693956] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:32.282 [2024-04-27 02:32:05.693962] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:32.282 [2024-04-27 02:32:05.694809] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:32.282 [2024-04-27 02:32:05.694819] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:32.282 [2024-04-27 02:32:05.694830] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:32.282 [2024-04-27 02:32:05.695813] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:32.282 [2024-04-27 02:32:05.695821] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:32.282 [2024-04-27 02:32:05.695828] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:32.282 [2024-04-27 02:32:05.696827] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:32.282 [2024-04-27 02:32:05.696835] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:32.282 [2024-04-27 02:32:05.697831] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:32.282 [2024-04-27 02:32:05.697839] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:32.282 [2024-04-27 02:32:05.697844] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:32.282 [2024-04-27 02:32:05.697850] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:32.282 [2024-04-27 02:32:05.697956] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:32.282 [2024-04-27 02:32:05.697961] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:32.282 [2024-04-27 02:32:05.697966] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:32.282 [2024-04-27 02:32:05.698849] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:32.282 [2024-04-27 02:32:05.699841] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:32.282 [2024-04-27 02:32:05.700861] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:32.282 [2024-04-27 02:32:05.701853] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:32.282 [2024-04-27 02:32:05.701925] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:32.282 [2024-04-27 02:32:05.702875] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:32.282 [2024-04-27 02:32:05.702883] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:32.282 [2024-04-27 02:32:05.702888] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.702909] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:32.282 [2024-04-27 02:32:05.702917] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.702932] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.282 [2024-04-27 02:32:05.702939] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.282 [2024-04-27 02:32:05.702952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.282 [2024-04-27 02:32:05.702993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:32.282 [2024-04-27 02:32:05.703002] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:32.282 [2024-04-27 02:32:05.703007] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:32.282 [2024-04-27 02:32:05.703011] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:32.282 [2024-04-27 02:32:05.703016] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:32.282 [2024-04-27 02:32:05.703020] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:32.282 [2024-04-27 02:32:05.703025] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:32.282 [2024-04-27 02:32:05.703029] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.703037] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 
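For reference, the controller being brought up in the trace above is the vfio-user target that target/nvmf_vfio_user.sh configured earlier in this log. A condensed sketch of that setup sequence follows; the rpc.py calls and paths are copied from the trace, the long script path is abbreviated purely for readability, and the comments are added here rather than taken from the log.

# "rpc.py" stands for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
rpc.py nvmf_create_transport -t VFIOUSER                                # register the VFIOUSER transport
mkdir -p /var/run/vfio-user/domain/vfio-user1/1                         # control/socket directory for device 1
rpc.py bdev_malloc_create 64 512 -b Malloc1                             # 64 MB malloc bdev with 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1     # subsystem, serial SPDK1, any host allowed
rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1         # expose Malloc1 as namespace 1
rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
    -a /var/run/vfio-user/domain/vfio-user1/1 -s 0                      # listen on the vfio-user directory

The log shows the same sequence repeated for Malloc2 and nqn.2019-07.io.spdk:cnode2 under /var/run/vfio-user/domain/vfio-user2/2.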
00:12:32.282 [2024-04-27 02:32:05.703046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:32.282 [2024-04-27 02:32:05.703058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:32.282 [2024-04-27 02:32:05.703070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.282 [2024-04-27 02:32:05.703079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.282 [2024-04-27 02:32:05.703087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.282 [2024-04-27 02:32:05.703095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.282 [2024-04-27 02:32:05.703100] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.703108] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.703118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:32.282 [2024-04-27 02:32:05.703129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:32.282 [2024-04-27 02:32:05.703134] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:32.282 [2024-04-27 02:32:05.703139] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.703149] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.703155] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.703164] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:32.282 [2024-04-27 02:32:05.703176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:32.282 [2024-04-27 02:32:05.703224] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.703232] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.703240] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:32.282 [2024-04-27 02:32:05.703244] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:12:32.282 [2024-04-27 02:32:05.703250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:32.282 [2024-04-27 02:32:05.703265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:32.282 [2024-04-27 02:32:05.703273] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:32.282 [2024-04-27 02:32:05.703291] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.703299] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.703305] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.282 [2024-04-27 02:32:05.703310] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.282 [2024-04-27 02:32:05.703315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.282 [2024-04-27 02:32:05.703334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:32.282 [2024-04-27 02:32:05.703346] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.703353] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:32.282 [2024-04-27 02:32:05.703360] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.282 [2024-04-27 02:32:05.703365] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.283 [2024-04-27 02:32:05.703370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.283 [2024-04-27 02:32:05.703383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:32.283 [2024-04-27 02:32:05.703391] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:32.283 [2024-04-27 02:32:05.703398] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:32.283 [2024-04-27 02:32:05.703405] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:32.283 [2024-04-27 02:32:05.703411] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:32.283 [2024-04-27 02:32:05.703416] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:32.283 [2024-04-27 
02:32:05.703423] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:32.283 [2024-04-27 02:32:05.703427] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:32.283 [2024-04-27 02:32:05.703432] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:32.283 [2024-04-27 02:32:05.703450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:32.283 [2024-04-27 02:32:05.703464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:32.283 [2024-04-27 02:32:05.703475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:32.283 [2024-04-27 02:32:05.703486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:32.283 [2024-04-27 02:32:05.703497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:32.283 [2024-04-27 02:32:05.703508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:32.283 [2024-04-27 02:32:05.703518] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:32.283 [2024-04-27 02:32:05.703527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:32.283 [2024-04-27 02:32:05.703538] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:32.283 [2024-04-27 02:32:05.703542] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:32.283 [2024-04-27 02:32:05.703546] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:32.283 [2024-04-27 02:32:05.703549] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:32.283 [2024-04-27 02:32:05.703555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:32.283 [2024-04-27 02:32:05.703563] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:32.283 [2024-04-27 02:32:05.703567] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:32.283 [2024-04-27 02:32:05.703573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:32.283 [2024-04-27 02:32:05.703580] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:32.283 [2024-04-27 02:32:05.703584] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.283 [2024-04-27 02:32:05.703590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 
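The *DEBUG* trace above is the admin-queue bring-up performed by spdk_nvme_identify against the vfio-user endpoint: register reads of VS, CAP, CC and CSTS, the CC.EN = 1 / CSTS.RDY = 1 handshake, IDENTIFY, GET/SET FEATURES and GET LOG PAGE commands. The formatted controller report that follows is its result. Copied from the start of this test and line-wrapped here only for readability, the invocation is:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g -L nvme -L nvme_vfio -L vfio_pci

The -r string selects the VFIOUSER transport and names the listener directory and subsystem NQN created earlier in the log; the -L flags enable the nvme, nvme_vfio and vfio_pci debug components that produce the *DEBUG* lines seen above.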
00:12:32.283 [2024-04-27 02:32:05.703597] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:32.283 [2024-04-27 02:32:05.703602] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:32.283 [2024-04-27 02:32:05.703607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:32.283 [2024-04-27 02:32:05.703614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:32.283 [2024-04-27 02:32:05.703627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:32.283 [2024-04-27 02:32:05.703636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:32.283 [2024-04-27 02:32:05.703644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:32.283 ===================================================== 00:12:32.283 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:32.283 ===================================================== 00:12:32.283 Controller Capabilities/Features 00:12:32.283 ================================ 00:12:32.283 Vendor ID: 4e58 00:12:32.283 Subsystem Vendor ID: 4e58 00:12:32.283 Serial Number: SPDK1 00:12:32.283 Model Number: SPDK bdev Controller 00:12:32.283 Firmware Version: 24.05 00:12:32.283 Recommended Arb Burst: 6 00:12:32.283 IEEE OUI Identifier: 8d 6b 50 00:12:32.283 Multi-path I/O 00:12:32.283 May have multiple subsystem ports: Yes 00:12:32.283 May have multiple controllers: Yes 00:12:32.283 Associated with SR-IOV VF: No 00:12:32.283 Max Data Transfer Size: 131072 00:12:32.283 Max Number of Namespaces: 32 00:12:32.283 Max Number of I/O Queues: 127 00:12:32.283 NVMe Specification Version (VS): 1.3 00:12:32.283 NVMe Specification Version (Identify): 1.3 00:12:32.283 Maximum Queue Entries: 256 00:12:32.283 Contiguous Queues Required: Yes 00:12:32.283 Arbitration Mechanisms Supported 00:12:32.283 Weighted Round Robin: Not Supported 00:12:32.283 Vendor Specific: Not Supported 00:12:32.283 Reset Timeout: 15000 ms 00:12:32.283 Doorbell Stride: 4 bytes 00:12:32.283 NVM Subsystem Reset: Not Supported 00:12:32.283 Command Sets Supported 00:12:32.283 NVM Command Set: Supported 00:12:32.283 Boot Partition: Not Supported 00:12:32.283 Memory Page Size Minimum: 4096 bytes 00:12:32.283 Memory Page Size Maximum: 4096 bytes 00:12:32.283 Persistent Memory Region: Not Supported 00:12:32.283 Optional Asynchronous Events Supported 00:12:32.283 Namespace Attribute Notices: Supported 00:12:32.283 Firmware Activation Notices: Not Supported 00:12:32.283 ANA Change Notices: Not Supported 00:12:32.283 PLE Aggregate Log Change Notices: Not Supported 00:12:32.283 LBA Status Info Alert Notices: Not Supported 00:12:32.283 EGE Aggregate Log Change Notices: Not Supported 00:12:32.283 Normal NVM Subsystem Shutdown event: Not Supported 00:12:32.283 Zone Descriptor Change Notices: Not Supported 00:12:32.283 Discovery Log Change Notices: Not Supported 00:12:32.283 Controller Attributes 00:12:32.283 128-bit Host Identifier: Supported 00:12:32.283 Non-Operational Permissive Mode: Not Supported 00:12:32.283 NVM Sets: Not Supported 00:12:32.283 Read Recovery Levels: Not Supported 
00:12:32.283 Endurance Groups: Not Supported 00:12:32.283 Predictable Latency Mode: Not Supported 00:12:32.283 Traffic Based Keep ALive: Not Supported 00:12:32.283 Namespace Granularity: Not Supported 00:12:32.283 SQ Associations: Not Supported 00:12:32.283 UUID List: Not Supported 00:12:32.283 Multi-Domain Subsystem: Not Supported 00:12:32.283 Fixed Capacity Management: Not Supported 00:12:32.283 Variable Capacity Management: Not Supported 00:12:32.283 Delete Endurance Group: Not Supported 00:12:32.283 Delete NVM Set: Not Supported 00:12:32.283 Extended LBA Formats Supported: Not Supported 00:12:32.283 Flexible Data Placement Supported: Not Supported 00:12:32.283 00:12:32.283 Controller Memory Buffer Support 00:12:32.283 ================================ 00:12:32.283 Supported: No 00:12:32.283 00:12:32.283 Persistent Memory Region Support 00:12:32.283 ================================ 00:12:32.283 Supported: No 00:12:32.283 00:12:32.283 Admin Command Set Attributes 00:12:32.283 ============================ 00:12:32.284 Security Send/Receive: Not Supported 00:12:32.284 Format NVM: Not Supported 00:12:32.284 Firmware Activate/Download: Not Supported 00:12:32.284 Namespace Management: Not Supported 00:12:32.284 Device Self-Test: Not Supported 00:12:32.284 Directives: Not Supported 00:12:32.284 NVMe-MI: Not Supported 00:12:32.284 Virtualization Management: Not Supported 00:12:32.284 Doorbell Buffer Config: Not Supported 00:12:32.284 Get LBA Status Capability: Not Supported 00:12:32.284 Command & Feature Lockdown Capability: Not Supported 00:12:32.284 Abort Command Limit: 4 00:12:32.284 Async Event Request Limit: 4 00:12:32.284 Number of Firmware Slots: N/A 00:12:32.284 Firmware Slot 1 Read-Only: N/A 00:12:32.284 Firmware Activation Without Reset: N/A 00:12:32.284 Multiple Update Detection Support: N/A 00:12:32.284 Firmware Update Granularity: No Information Provided 00:12:32.284 Per-Namespace SMART Log: No 00:12:32.284 Asymmetric Namespace Access Log Page: Not Supported 00:12:32.284 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:32.284 Command Effects Log Page: Supported 00:12:32.284 Get Log Page Extended Data: Supported 00:12:32.284 Telemetry Log Pages: Not Supported 00:12:32.284 Persistent Event Log Pages: Not Supported 00:12:32.284 Supported Log Pages Log Page: May Support 00:12:32.284 Commands Supported & Effects Log Page: Not Supported 00:12:32.284 Feature Identifiers & Effects Log Page:May Support 00:12:32.284 NVMe-MI Commands & Effects Log Page: May Support 00:12:32.284 Data Area 4 for Telemetry Log: Not Supported 00:12:32.284 Error Log Page Entries Supported: 128 00:12:32.284 Keep Alive: Supported 00:12:32.284 Keep Alive Granularity: 10000 ms 00:12:32.284 00:12:32.284 NVM Command Set Attributes 00:12:32.284 ========================== 00:12:32.284 Submission Queue Entry Size 00:12:32.284 Max: 64 00:12:32.284 Min: 64 00:12:32.284 Completion Queue Entry Size 00:12:32.284 Max: 16 00:12:32.284 Min: 16 00:12:32.284 Number of Namespaces: 32 00:12:32.284 Compare Command: Supported 00:12:32.284 Write Uncorrectable Command: Not Supported 00:12:32.284 Dataset Management Command: Supported 00:12:32.284 Write Zeroes Command: Supported 00:12:32.284 Set Features Save Field: Not Supported 00:12:32.284 Reservations: Not Supported 00:12:32.284 Timestamp: Not Supported 00:12:32.284 Copy: Supported 00:12:32.284 Volatile Write Cache: Present 00:12:32.284 Atomic Write Unit (Normal): 1 00:12:32.284 Atomic Write Unit (PFail): 1 00:12:32.284 Atomic Compare & Write Unit: 1 00:12:32.284 Fused Compare & Write: 
Supported 00:12:32.284 Scatter-Gather List 00:12:32.284 SGL Command Set: Supported (Dword aligned) 00:12:32.284 SGL Keyed: Not Supported 00:12:32.284 SGL Bit Bucket Descriptor: Not Supported 00:12:32.284 SGL Metadata Pointer: Not Supported 00:12:32.284 Oversized SGL: Not Supported 00:12:32.284 SGL Metadata Address: Not Supported 00:12:32.284 SGL Offset: Not Supported 00:12:32.284 Transport SGL Data Block: Not Supported 00:12:32.284 Replay Protected Memory Block: Not Supported 00:12:32.284 00:12:32.284 Firmware Slot Information 00:12:32.284 ========================= 00:12:32.284 Active slot: 1 00:12:32.284 Slot 1 Firmware Revision: 24.05 00:12:32.284 00:12:32.284 00:12:32.284 Commands Supported and Effects 00:12:32.284 ============================== 00:12:32.284 Admin Commands 00:12:32.284 -------------- 00:12:32.284 Get Log Page (02h): Supported 00:12:32.284 Identify (06h): Supported 00:12:32.284 Abort (08h): Supported 00:12:32.284 Set Features (09h): Supported 00:12:32.284 Get Features (0Ah): Supported 00:12:32.284 Asynchronous Event Request (0Ch): Supported 00:12:32.284 Keep Alive (18h): Supported 00:12:32.284 I/O Commands 00:12:32.284 ------------ 00:12:32.284 Flush (00h): Supported LBA-Change 00:12:32.284 Write (01h): Supported LBA-Change 00:12:32.284 Read (02h): Supported 00:12:32.284 Compare (05h): Supported 00:12:32.284 Write Zeroes (08h): Supported LBA-Change 00:12:32.284 Dataset Management (09h): Supported LBA-Change 00:12:32.284 Copy (19h): Supported LBA-Change 00:12:32.284 Unknown (79h): Supported LBA-Change 00:12:32.284 Unknown (7Ah): Supported 00:12:32.284 00:12:32.284 Error Log 00:12:32.284 ========= 00:12:32.284 00:12:32.284 Arbitration 00:12:32.284 =========== 00:12:32.284 Arbitration Burst: 1 00:12:32.284 00:12:32.284 Power Management 00:12:32.284 ================ 00:12:32.284 Number of Power States: 1 00:12:32.284 Current Power State: Power State #0 00:12:32.284 Power State #0: 00:12:32.284 Max Power: 0.00 W 00:12:32.284 Non-Operational State: Operational 00:12:32.284 Entry Latency: Not Reported 00:12:32.284 Exit Latency: Not Reported 00:12:32.284 Relative Read Throughput: 0 00:12:32.284 Relative Read Latency: 0 00:12:32.284 Relative Write Throughput: 0 00:12:32.284 Relative Write Latency: 0 00:12:32.284 Idle Power: Not Reported 00:12:32.284 Active Power: Not Reported 00:12:32.284 Non-Operational Permissive Mode: Not Supported 00:12:32.284 00:12:32.284 Health Information 00:12:32.284 ================== 00:12:32.284 Critical Warnings: 00:12:32.284 Available Spare Space: OK 00:12:32.284 Temperature: OK 00:12:32.284 Device Reliability: OK 00:12:32.284 Read Only: No 00:12:32.284 Volatile Memory Backup: OK 00:12:32.284 Current Temperature: 0 Kelvin (-2[2024-04-27 02:32:05.703751] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:32.284 [2024-04-27 02:32:05.703767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:32.284 [2024-04-27 02:32:05.703792] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:32.284 [2024-04-27 02:32:05.703801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.284 [2024-04-27 02:32:05.703807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.284 
[2024-04-27 02:32:05.703813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.284 [2024-04-27 02:32:05.703820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.284 [2024-04-27 02:32:05.703888] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:32.284 [2024-04-27 02:32:05.703898] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:32.284 [2024-04-27 02:32:05.704889] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:32.284 [2024-04-27 02:32:05.704938] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:32.284 [2024-04-27 02:32:05.704944] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:32.284 [2024-04-27 02:32:05.705895] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:32.284 [2024-04-27 02:32:05.705906] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:32.284 [2024-04-27 02:32:05.705966] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:32.284 [2024-04-27 02:32:05.709287] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:32.284 73 Celsius) 00:12:32.284 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:32.285 Available Spare: 0% 00:12:32.285 Available Spare Threshold: 0% 00:12:32.285 Life Percentage Used: 0% 00:12:32.285 Data Units Read: 0 00:12:32.285 Data Units Written: 0 00:12:32.285 Host Read Commands: 0 00:12:32.285 Host Write Commands: 0 00:12:32.285 Controller Busy Time: 0 minutes 00:12:32.285 Power Cycles: 0 00:12:32.285 Power On Hours: 0 hours 00:12:32.285 Unsafe Shutdowns: 0 00:12:32.285 Unrecoverable Media Errors: 0 00:12:32.285 Lifetime Error Log Entries: 0 00:12:32.285 Warning Temperature Time: 0 minutes 00:12:32.285 Critical Temperature Time: 0 minutes 00:12:32.285 00:12:32.285 Number of Queues 00:12:32.285 ================ 00:12:32.285 Number of I/O Submission Queues: 127 00:12:32.285 Number of I/O Completion Queues: 127 00:12:32.285 00:12:32.285 Active Namespaces 00:12:32.285 ================= 00:12:32.285 Namespace ID:1 00:12:32.285 Error Recovery Timeout: Unlimited 00:12:32.285 Command Set Identifier: NVM (00h) 00:12:32.285 Deallocate: Supported 00:12:32.285 Deallocated/Unwritten Error: Not Supported 00:12:32.285 Deallocated Read Value: Unknown 00:12:32.285 Deallocate in Write Zeroes: Not Supported 00:12:32.285 Deallocated Guard Field: 0xFFFF 00:12:32.285 Flush: Supported 00:12:32.285 Reservation: Supported 00:12:32.285 Namespace Sharing Capabilities: Multiple Controllers 00:12:32.285 Size (in LBAs): 131072 (0GiB) 00:12:32.285 Capacity (in LBAs): 131072 (0GiB) 00:12:32.285 Utilization (in LBAs): 131072 (0GiB) 00:12:32.285 NGUID: 546E92D6053A49BAA65753E4E3E52DFF 00:12:32.285 UUID: 546e92d6-053a-49ba-a657-53e4e3e52dff 00:12:32.285 Thin Provisioning: Not Supported 00:12:32.285 Per-NS Atomic 
Units: Yes 00:12:32.285 Atomic Boundary Size (Normal): 0 00:12:32.285 Atomic Boundary Size (PFail): 0 00:12:32.285 Atomic Boundary Offset: 0 00:12:32.285 Maximum Single Source Range Length: 65535 00:12:32.285 Maximum Copy Length: 65535 00:12:32.285 Maximum Source Range Count: 1 00:12:32.285 NGUID/EUI64 Never Reused: No 00:12:32.285 Namespace Write Protected: No 00:12:32.285 Number of LBA Formats: 1 00:12:32.285 Current LBA Format: LBA Format #00 00:12:32.285 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:32.285 00:12:32.285 02:32:05 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:32.285 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.547 [2024-04-27 02:32:05.909956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:37.835 [2024-04-27 02:32:10.931743] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:37.835 Initializing NVMe Controllers 00:12:37.835 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:37.835 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:37.835 Initialization complete. Launching workers. 00:12:37.835 ======================================================== 00:12:37.836 Latency(us) 00:12:37.836 Device Information : IOPS MiB/s Average min max 00:12:37.836 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34166.80 133.46 3747.16 1223.56 9145.20 00:12:37.836 ======================================================== 00:12:37.836 Total : 34166.80 133.46 3747.16 1223.56 9145.20 00:12:37.836 00:12:37.836 02:32:10 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:37.836 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.836 [2024-04-27 02:32:11.136751] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:43.128 [2024-04-27 02:32:16.175640] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:43.128 Initializing NVMe Controllers 00:12:43.128 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:43.128 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:43.128 Initialization complete. Launching workers. 
00:12:43.128 ======================================================== 00:12:43.128 Latency(us) 00:12:43.128 Device Information : IOPS MiB/s Average min max 00:12:43.128 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.18 62.70 7985.34 7495.77 10977.02 00:12:43.128 ======================================================== 00:12:43.128 Total : 16051.18 62.70 7985.34 7495.77 10977.02 00:12:43.128 00:12:43.128 02:32:16 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:43.128 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.128 [2024-04-27 02:32:16.398661] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.443 [2024-04-27 02:32:21.491528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.443 Initializing NVMe Controllers 00:12:48.443 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.443 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.443 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:48.443 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:48.443 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:48.443 Initialization complete. Launching workers. 00:12:48.443 Starting thread on core 2 00:12:48.443 Starting thread on core 3 00:12:48.443 Starting thread on core 1 00:12:48.443 02:32:21 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:48.443 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.443 [2024-04-27 02:32:21.749846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:51.747 [2024-04-27 02:32:25.029422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:51.748 Initializing NVMe Controllers 00:12:51.748 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.748 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.748 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:51.748 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:51.748 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:51.748 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:51.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:51.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:51.748 Initialization complete. Launching workers. 
00:12:51.748 Starting thread on core 1 with urgent priority queue 00:12:51.748 Starting thread on core 2 with urgent priority queue 00:12:51.748 Starting thread on core 3 with urgent priority queue 00:12:51.748 Starting thread on core 0 with urgent priority queue 00:12:51.748 SPDK bdev Controller (SPDK1 ) core 0: 9699.00 IO/s 10.31 secs/100000 ios 00:12:51.748 SPDK bdev Controller (SPDK1 ) core 1: 8560.33 IO/s 11.68 secs/100000 ios 00:12:51.748 SPDK bdev Controller (SPDK1 ) core 2: 7332.33 IO/s 13.64 secs/100000 ios 00:12:51.748 SPDK bdev Controller (SPDK1 ) core 3: 9307.67 IO/s 10.74 secs/100000 ios 00:12:51.748 ======================================================== 00:12:51.748 00:12:51.748 02:32:25 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:51.748 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.748 [2024-04-27 02:32:25.290795] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:51.748 [2024-04-27 02:32:25.325022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:51.748 Initializing NVMe Controllers 00:12:51.748 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.748 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.748 Namespace ID: 1 size: 0GB 00:12:51.748 Initialization complete. 00:12:51.748 INFO: using host memory buffer for IO 00:12:51.748 Hello world! 00:12:52.008 02:32:25 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:52.009 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.009 [2024-04-27 02:32:25.592760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.395 Initializing NVMe Controllers 00:12:53.395 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.395 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.395 Initialization complete. Launching workers. 
00:12:53.395 submit (in ns) avg, min, max = 7356.5, 3875.8, 5995817.5 00:12:53.395 complete (in ns) avg, min, max = 17486.6, 2360.0, 5994804.2 00:12:53.395 00:12:53.395 Submit histogram 00:12:53.395 ================ 00:12:53.395 Range in us Cumulative Count 00:12:53.395 3.867 - 3.893: 1.1352% ( 225) 00:12:53.395 3.893 - 3.920: 6.8866% ( 1140) 00:12:53.395 3.920 - 3.947: 16.1193% ( 1830) 00:12:53.395 3.947 - 3.973: 27.6424% ( 2284) 00:12:53.395 3.973 - 4.000: 38.8628% ( 2224) 00:12:53.395 4.000 - 4.027: 49.9622% ( 2200) 00:12:53.395 4.027 - 4.053: 66.1420% ( 3207) 00:12:53.395 4.053 - 4.080: 81.0454% ( 2954) 00:12:53.395 4.080 - 4.107: 91.0499% ( 1983) 00:12:53.395 4.107 - 4.133: 96.4482% ( 1070) 00:12:53.395 4.133 - 4.160: 98.5268% ( 412) 00:12:53.395 4.160 - 4.187: 99.2836% ( 150) 00:12:53.395 4.187 - 4.213: 99.4955% ( 42) 00:12:53.395 4.213 - 4.240: 99.5510% ( 11) 00:12:53.395 4.240 - 4.267: 99.5560% ( 1) 00:12:53.395 4.427 - 4.453: 99.5611% ( 1) 00:12:53.395 4.453 - 4.480: 99.5661% ( 1) 00:12:53.395 4.507 - 4.533: 99.5712% ( 1) 00:12:53.395 4.720 - 4.747: 99.5762% ( 1) 00:12:53.395 4.907 - 4.933: 99.5813% ( 1) 00:12:53.395 5.013 - 5.040: 99.5863% ( 1) 00:12:53.395 5.147 - 5.173: 99.5913% ( 1) 00:12:53.395 5.227 - 5.253: 99.5964% ( 1) 00:12:53.395 5.253 - 5.280: 99.6014% ( 1) 00:12:53.395 5.387 - 5.413: 99.6065% ( 1) 00:12:53.396 5.413 - 5.440: 99.6115% ( 1) 00:12:53.396 5.440 - 5.467: 99.6166% ( 1) 00:12:53.396 5.760 - 5.787: 99.6216% ( 1) 00:12:53.396 5.920 - 5.947: 99.6267% ( 1) 00:12:53.396 6.027 - 6.053: 99.6317% ( 1) 00:12:53.396 6.053 - 6.080: 99.6367% ( 1) 00:12:53.396 6.080 - 6.107: 99.6418% ( 1) 00:12:53.396 6.133 - 6.160: 99.6468% ( 1) 00:12:53.396 6.213 - 6.240: 99.6519% ( 1) 00:12:53.396 6.240 - 6.267: 99.6569% ( 1) 00:12:53.396 6.373 - 6.400: 99.6670% ( 2) 00:12:53.396 6.427 - 6.453: 99.6721% ( 1) 00:12:53.396 6.453 - 6.480: 99.6771% ( 1) 00:12:53.396 6.533 - 6.560: 99.6872% ( 2) 00:12:53.396 6.560 - 6.587: 99.6922% ( 1) 00:12:53.396 6.640 - 6.667: 99.6973% ( 1) 00:12:53.396 6.667 - 6.693: 99.7023% ( 1) 00:12:53.396 6.880 - 6.933: 99.7074% ( 1) 00:12:53.396 6.933 - 6.987: 99.7175% ( 2) 00:12:53.396 6.987 - 7.040: 99.7225% ( 1) 00:12:53.396 7.200 - 7.253: 99.7326% ( 2) 00:12:53.396 7.253 - 7.307: 99.7377% ( 1) 00:12:53.396 7.307 - 7.360: 99.7427% ( 1) 00:12:53.396 7.360 - 7.413: 99.7578% ( 3) 00:12:53.396 7.413 - 7.467: 99.7629% ( 1) 00:12:53.396 7.520 - 7.573: 99.7679% ( 1) 00:12:53.396 7.627 - 7.680: 99.7730% ( 1) 00:12:53.396 7.680 - 7.733: 99.7831% ( 2) 00:12:53.396 7.733 - 7.787: 99.7881% ( 1) 00:12:53.396 7.787 - 7.840: 99.7931% ( 1) 00:12:53.396 7.840 - 7.893: 99.7982% ( 1) 00:12:53.396 7.893 - 7.947: 99.8032% ( 1) 00:12:53.396 7.947 - 8.000: 99.8133% ( 2) 00:12:53.396 8.107 - 8.160: 99.8184% ( 1) 00:12:53.396 8.160 - 8.213: 99.8234% ( 1) 00:12:53.396 8.213 - 8.267: 99.8285% ( 1) 00:12:53.396 8.267 - 8.320: 99.8386% ( 2) 00:12:53.396 8.320 - 8.373: 99.8436% ( 1) 00:12:53.396 8.373 - 8.427: 99.8486% ( 1) 00:12:53.396 8.480 - 8.533: 99.8537% ( 1) 00:12:53.396 8.533 - 8.587: 99.8587% ( 1) 00:12:53.396 8.640 - 8.693: 99.8688% ( 2) 00:12:53.396 8.693 - 8.747: 99.8739% ( 1) 00:12:53.396 8.747 - 8.800: 99.8789% ( 1) 00:12:53.396 8.960 - 9.013: 99.8840% ( 1) 00:12:53.396 9.013 - 9.067: 99.8890% ( 1) 00:12:53.396 9.227 - 9.280: 99.8941% ( 1) 00:12:53.396 9.333 - 9.387: 99.9041% ( 2) 00:12:53.396 9.387 - 9.440: 99.9092% ( 1) 00:12:53.396 9.547 - 9.600: 99.9142% ( 1) 00:12:53.396 10.720 - 10.773: 99.9193% ( 1) 00:12:53.396 3986.773 - 4014.080: 99.9950% ( 15) 00:12:53.396 
5980.160 - 6007.467: 100.0000% ( 1) 00:12:53.396 00:12:53.396 Complete histogram 00:12:53.396 ================== 00:12:53.396 Ra[2024-04-27 02:32:26.620205] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.396 nge in us Cumulative Count 00:12:53.396 2.360 - 2.373: 2.3258% ( 461) 00:12:53.396 2.373 - 2.387: 2.5680% ( 48) 00:12:53.396 2.387 - 2.400: 2.9111% ( 68) 00:12:53.396 2.400 - 2.413: 2.9716% ( 12) 00:12:53.396 2.413 - 2.427: 42.8939% ( 7913) 00:12:53.396 2.427 - 2.440: 61.9898% ( 3785) 00:12:53.396 2.440 - 2.453: 71.9489% ( 1974) 00:12:53.396 2.453 - 2.467: 79.0979% ( 1417) 00:12:53.396 2.467 - 2.480: 81.9888% ( 573) 00:12:53.396 2.480 - 2.493: 84.3045% ( 459) 00:12:53.396 2.493 - 2.507: 90.5605% ( 1240) 00:12:53.396 2.507 - 2.520: 94.7984% ( 840) 00:12:53.396 2.520 - 2.533: 97.0082% ( 438) 00:12:53.396 2.533 - 2.547: 98.4461% ( 285) 00:12:53.396 2.547 - 2.560: 99.1020% ( 130) 00:12:53.396 2.560 - 2.573: 99.2987% ( 39) 00:12:53.396 2.573 - 2.587: 99.3340% ( 7) 00:12:53.396 4.640 - 4.667: 99.3391% ( 1) 00:12:53.396 4.667 - 4.693: 99.3542% ( 3) 00:12:53.396 4.693 - 4.720: 99.3593% ( 1) 00:12:53.396 4.747 - 4.773: 99.3694% ( 2) 00:12:53.396 4.800 - 4.827: 99.3744% ( 1) 00:12:53.396 4.853 - 4.880: 99.3794% ( 1) 00:12:53.396 4.880 - 4.907: 99.3845% ( 1) 00:12:53.396 4.907 - 4.933: 99.3895% ( 1) 00:12:53.396 4.960 - 4.987: 99.3946% ( 1) 00:12:53.396 5.040 - 5.067: 99.3996% ( 1) 00:12:53.396 5.067 - 5.093: 99.4047% ( 1) 00:12:53.396 5.120 - 5.147: 99.4097% ( 1) 00:12:53.396 5.227 - 5.253: 99.4198% ( 2) 00:12:53.396 5.253 - 5.280: 99.4249% ( 1) 00:12:53.396 5.467 - 5.493: 99.4299% ( 1) 00:12:53.396 5.600 - 5.627: 99.4400% ( 2) 00:12:53.396 5.707 - 5.733: 99.4450% ( 1) 00:12:53.396 5.893 - 5.920: 99.4551% ( 2) 00:12:53.396 5.973 - 6.000: 99.4602% ( 1) 00:12:53.396 6.000 - 6.027: 99.4703% ( 2) 00:12:53.396 6.133 - 6.160: 99.4753% ( 1) 00:12:53.396 6.187 - 6.213: 99.4854% ( 2) 00:12:53.396 6.213 - 6.240: 99.4904% ( 1) 00:12:53.396 6.240 - 6.267: 99.4955% ( 1) 00:12:53.396 6.373 - 6.400: 99.5005% ( 1) 00:12:53.396 6.400 - 6.427: 99.5157% ( 3) 00:12:53.396 6.453 - 6.480: 99.5207% ( 1) 00:12:53.396 6.507 - 6.533: 99.5258% ( 1) 00:12:53.396 6.533 - 6.560: 99.5308% ( 1) 00:12:53.396 6.587 - 6.613: 99.5358% ( 1) 00:12:53.396 6.693 - 6.720: 99.5409% ( 1) 00:12:53.396 6.880 - 6.933: 99.5510% ( 2) 00:12:53.396 6.987 - 7.040: 99.5611% ( 2) 00:12:53.396 7.040 - 7.093: 99.5661% ( 1) 00:12:53.396 7.253 - 7.307: 99.5712% ( 1) 00:12:53.396 7.307 - 7.360: 99.5762% ( 1) 00:12:53.396 7.360 - 7.413: 99.5813% ( 1) 00:12:53.396 7.627 - 7.680: 99.5863% ( 1) 00:12:53.396 7.733 - 7.787: 99.5913% ( 1) 00:12:53.396 7.840 - 7.893: 99.5964% ( 1) 00:12:53.396 10.933 - 10.987: 99.6014% ( 1) 00:12:53.396 12.053 - 12.107: 99.6065% ( 1) 00:12:53.396 13.227 - 13.280: 99.6115% ( 1) 00:12:53.396 16.640 - 16.747: 99.6166% ( 1) 00:12:53.396 43.733 - 43.947: 99.6216% ( 1) 00:12:53.396 164.693 - 165.547: 99.6267% ( 1) 00:12:53.396 2007.040 - 2020.693: 99.6367% ( 2) 00:12:53.396 2020.693 - 2034.347: 99.6418% ( 1) 00:12:53.396 3986.773 - 4014.080: 99.9798% ( 67) 00:12:53.396 5980.160 - 6007.467: 100.0000% ( 4) 00:12:53.396 00:12:53.396 02:32:26 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:53.396 02:32:26 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:53.396 02:32:26 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 
00:12:53.396 02:32:26 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:53.396 02:32:26 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:53.396 [2024-04-27 02:32:26.807627] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:53.396 [ 00:12:53.396 { 00:12:53.396 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:53.396 "subtype": "Discovery", 00:12:53.396 "listen_addresses": [], 00:12:53.396 "allow_any_host": true, 00:12:53.396 "hosts": [] 00:12:53.396 }, 00:12:53.396 { 00:12:53.396 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:53.396 "subtype": "NVMe", 00:12:53.396 "listen_addresses": [ 00:12:53.396 { 00:12:53.396 "transport": "VFIOUSER", 00:12:53.396 "trtype": "VFIOUSER", 00:12:53.396 "adrfam": "IPv4", 00:12:53.396 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:53.396 "trsvcid": "0" 00:12:53.396 } 00:12:53.396 ], 00:12:53.396 "allow_any_host": true, 00:12:53.396 "hosts": [], 00:12:53.396 "serial_number": "SPDK1", 00:12:53.396 "model_number": "SPDK bdev Controller", 00:12:53.396 "max_namespaces": 32, 00:12:53.396 "min_cntlid": 1, 00:12:53.396 "max_cntlid": 65519, 00:12:53.396 "namespaces": [ 00:12:53.396 { 00:12:53.396 "nsid": 1, 00:12:53.396 "bdev_name": "Malloc1", 00:12:53.396 "name": "Malloc1", 00:12:53.396 "nguid": "546E92D6053A49BAA65753E4E3E52DFF", 00:12:53.396 "uuid": "546e92d6-053a-49ba-a657-53e4e3e52dff" 00:12:53.396 } 00:12:53.396 ] 00:12:53.396 }, 00:12:53.396 { 00:12:53.396 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:53.396 "subtype": "NVMe", 00:12:53.396 "listen_addresses": [ 00:12:53.396 { 00:12:53.396 "transport": "VFIOUSER", 00:12:53.396 "trtype": "VFIOUSER", 00:12:53.396 "adrfam": "IPv4", 00:12:53.396 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:53.396 "trsvcid": "0" 00:12:53.396 } 00:12:53.396 ], 00:12:53.396 "allow_any_host": true, 00:12:53.396 "hosts": [], 00:12:53.396 "serial_number": "SPDK2", 00:12:53.396 "model_number": "SPDK bdev Controller", 00:12:53.396 "max_namespaces": 32, 00:12:53.396 "min_cntlid": 1, 00:12:53.396 "max_cntlid": 65519, 00:12:53.396 "namespaces": [ 00:12:53.396 { 00:12:53.396 "nsid": 1, 00:12:53.396 "bdev_name": "Malloc2", 00:12:53.396 "name": "Malloc2", 00:12:53.396 "nguid": "1C3EB676D54246C79C976FE5C1E3A5FC", 00:12:53.396 "uuid": "1c3eb676-d542-46c7-9c97-6fe5c1e3a5fc" 00:12:53.396 } 00:12:53.396 ] 00:12:53.396 } 00:12:53.396 ] 00:12:53.396 02:32:26 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:53.396 02:32:26 -- target/nvmf_vfio_user.sh@34 -- # aerpid=43717 00:12:53.397 02:32:26 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:53.397 02:32:26 -- common/autotest_common.sh@1251 -- # local i=0 00:12:53.397 02:32:26 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:53.397 02:32:26 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:53.397 02:32:26 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:53.397 02:32:26 -- common/autotest_common.sh@1262 -- # return 0 00:12:53.397 02:32:26 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:53.397 02:32:26 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:53.397 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.397 Malloc3 00:12:53.397 [2024-04-27 02:32:27.008784] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.658 02:32:27 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:53.658 [2024-04-27 02:32:27.171071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.658 02:32:27 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:53.658 Asynchronous Event Request test 00:12:53.658 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.658 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.658 Registering asynchronous event callbacks... 00:12:53.658 Starting namespace attribute notice tests for all controllers... 00:12:53.658 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:53.658 aer_cb - Changed Namespace 00:12:53.658 Cleaning up... 00:12:53.921 [ 00:12:53.921 { 00:12:53.921 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:53.921 "subtype": "Discovery", 00:12:53.921 "listen_addresses": [], 00:12:53.921 "allow_any_host": true, 00:12:53.921 "hosts": [] 00:12:53.921 }, 00:12:53.921 { 00:12:53.921 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:53.921 "subtype": "NVMe", 00:12:53.921 "listen_addresses": [ 00:12:53.921 { 00:12:53.921 "transport": "VFIOUSER", 00:12:53.921 "trtype": "VFIOUSER", 00:12:53.921 "adrfam": "IPv4", 00:12:53.921 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:53.921 "trsvcid": "0" 00:12:53.921 } 00:12:53.921 ], 00:12:53.921 "allow_any_host": true, 00:12:53.921 "hosts": [], 00:12:53.922 "serial_number": "SPDK1", 00:12:53.922 "model_number": "SPDK bdev Controller", 00:12:53.922 "max_namespaces": 32, 00:12:53.922 "min_cntlid": 1, 00:12:53.922 "max_cntlid": 65519, 00:12:53.922 "namespaces": [ 00:12:53.922 { 00:12:53.922 "nsid": 1, 00:12:53.922 "bdev_name": "Malloc1", 00:12:53.922 "name": "Malloc1", 00:12:53.922 "nguid": "546E92D6053A49BAA65753E4E3E52DFF", 00:12:53.922 "uuid": "546e92d6-053a-49ba-a657-53e4e3e52dff" 00:12:53.922 }, 00:12:53.922 { 00:12:53.922 "nsid": 2, 00:12:53.922 "bdev_name": "Malloc3", 00:12:53.922 "name": "Malloc3", 00:12:53.922 "nguid": "B789282DF66547F08CBDE2C5DCF049C4", 00:12:53.922 "uuid": "b789282d-f665-47f0-8cbd-e2c5dcf049c4" 00:12:53.922 } 00:12:53.922 ] 00:12:53.922 }, 00:12:53.922 { 00:12:53.922 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:53.922 "subtype": "NVMe", 00:12:53.922 "listen_addresses": [ 00:12:53.922 { 00:12:53.922 "transport": "VFIOUSER", 00:12:53.922 "trtype": "VFIOUSER", 00:12:53.922 "adrfam": "IPv4", 00:12:53.922 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:53.922 "trsvcid": "0" 00:12:53.922 } 00:12:53.922 ], 00:12:53.922 "allow_any_host": true, 00:12:53.922 "hosts": [], 00:12:53.922 "serial_number": "SPDK2", 00:12:53.922 "model_number": "SPDK bdev Controller", 00:12:53.922 "max_namespaces": 32, 00:12:53.922 "min_cntlid": 1, 
00:12:53.922 "max_cntlid": 65519, 00:12:53.922 "namespaces": [ 00:12:53.922 { 00:12:53.922 "nsid": 1, 00:12:53.922 "bdev_name": "Malloc2", 00:12:53.922 "name": "Malloc2", 00:12:53.922 "nguid": "1C3EB676D54246C79C976FE5C1E3A5FC", 00:12:53.922 "uuid": "1c3eb676-d542-46c7-9c97-6fe5c1e3a5fc" 00:12:53.922 } 00:12:53.922 ] 00:12:53.922 } 00:12:53.922 ] 00:12:53.922 02:32:27 -- target/nvmf_vfio_user.sh@44 -- # wait 43717 00:12:53.922 02:32:27 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:53.922 02:32:27 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:53.922 02:32:27 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:53.922 02:32:27 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:53.922 [2024-04-27 02:32:27.391965] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:12:53.922 [2024-04-27 02:32:27.392013] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43940 ] 00:12:53.922 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.922 [2024-04-27 02:32:27.425821] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:53.922 [2024-04-27 02:32:27.433287] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:53.922 [2024-04-27 02:32:27.433307] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7efece565000 00:12:53.922 [2024-04-27 02:32:27.433490] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:53.922 [2024-04-27 02:32:27.434500] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:53.922 [2024-04-27 02:32:27.435503] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:53.922 [2024-04-27 02:32:27.436508] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:53.922 [2024-04-27 02:32:27.437517] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:53.922 [2024-04-27 02:32:27.438529] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:53.922 [2024-04-27 02:32:27.439536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:53.922 [2024-04-27 02:32:27.440547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:53.922 [2024-04-27 02:32:27.441556] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:53.922 [2024-04-27 02:32:27.441568] vfio_user_pci.c: 
233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7efece55a000 00:12:53.922 [2024-04-27 02:32:27.442893] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:53.922 [2024-04-27 02:32:27.460222] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:53.922 [2024-04-27 02:32:27.460245] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:53.922 [2024-04-27 02:32:27.462313] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:53.922 [2024-04-27 02:32:27.462360] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:53.922 [2024-04-27 02:32:27.462440] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:53.922 [2024-04-27 02:32:27.462454] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:53.922 [2024-04-27 02:32:27.462460] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:53.922 [2024-04-27 02:32:27.465285] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:53.922 [2024-04-27 02:32:27.465295] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:53.922 [2024-04-27 02:32:27.465302] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:53.922 [2024-04-27 02:32:27.465344] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:53.922 [2024-04-27 02:32:27.465352] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:53.922 [2024-04-27 02:32:27.465360] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:53.922 [2024-04-27 02:32:27.466352] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:53.922 [2024-04-27 02:32:27.466361] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:53.922 [2024-04-27 02:32:27.467357] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:53.922 [2024-04-27 02:32:27.467366] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:53.922 [2024-04-27 02:32:27.467371] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:53.922 [2024-04-27 02:32:27.467378] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:53.922 [2024-04-27 02:32:27.467483] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:53.922 [2024-04-27 02:32:27.467488] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:53.922 [2024-04-27 02:32:27.467493] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:53.922 [2024-04-27 02:32:27.468364] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:53.922 [2024-04-27 02:32:27.469366] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:53.922 [2024-04-27 02:32:27.470375] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:53.922 [2024-04-27 02:32:27.471385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:53.922 [2024-04-27 02:32:27.471424] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:53.922 [2024-04-27 02:32:27.472396] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:53.922 [2024-04-27 02:32:27.472406] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:53.922 [2024-04-27 02:32:27.472410] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:53.922 [2024-04-27 02:32:27.472432] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:53.922 [2024-04-27 02:32:27.472439] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:53.922 [2024-04-27 02:32:27.472452] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:53.922 [2024-04-27 02:32:27.472457] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:53.922 [2024-04-27 02:32:27.472468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:53.922 [2024-04-27 02:32:27.480287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:53.922 [2024-04-27 02:32:27.480298] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:53.922 [2024-04-27 02:32:27.480303] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:53.922 [2024-04-27 02:32:27.480307] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:53.922 [2024-04-27 02:32:27.480312] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:53.922 [2024-04-27 02:32:27.480317] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:53.922 [2024-04-27 02:32:27.480321] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:53.923 [2024-04-27 02:32:27.480326] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.480334] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.480344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:53.923 [2024-04-27 02:32:27.488285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:53.923 [2024-04-27 02:32:27.488300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.923 [2024-04-27 02:32:27.488309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.923 [2024-04-27 02:32:27.488317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.923 [2024-04-27 02:32:27.488325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.923 [2024-04-27 02:32:27.488332] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.488340] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.488349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:53.923 [2024-04-27 02:32:27.494286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:53.923 [2024-04-27 02:32:27.494295] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:53.923 [2024-04-27 02:32:27.494300] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.494309] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.494315] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.494323] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:53.923 [2024-04-27 02:32:27.504286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:53.923 [2024-04-27 02:32:27.504336] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.504344] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.504352] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:53.923 [2024-04-27 02:32:27.504356] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:53.923 [2024-04-27 02:32:27.504363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:53.923 [2024-04-27 02:32:27.512286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:53.923 [2024-04-27 02:32:27.512297] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:53.923 [2024-04-27 02:32:27.512309] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.512317] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.512324] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:53.923 [2024-04-27 02:32:27.512328] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:53.923 [2024-04-27 02:32:27.512334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:53.923 [2024-04-27 02:32:27.520284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:53.923 [2024-04-27 02:32:27.520298] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.520306] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.520315] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:53.923 [2024-04-27 02:32:27.520320] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:53.923 [2024-04-27 02:32:27.520326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:53.923 [2024-04-27 02:32:27.528283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:53.923 [2024-04-27 02:32:27.528293] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.528300] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.528308] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.528313] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.528318] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.528323] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:53.923 [2024-04-27 02:32:27.528328] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:53.923 [2024-04-27 02:32:27.528333] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:53.923 [2024-04-27 02:32:27.528349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:53.923 [2024-04-27 02:32:27.536283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:53.923 [2024-04-27 02:32:27.536297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:54.185 [2024-04-27 02:32:27.544283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:54.185 [2024-04-27 02:32:27.544297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:54.185 [2024-04-27 02:32:27.552286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:54.185 [2024-04-27 02:32:27.552302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:54.185 [2024-04-27 02:32:27.560285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:54.186 [2024-04-27 02:32:27.560299] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:54.186 [2024-04-27 02:32:27.560303] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:54.186 [2024-04-27 02:32:27.560307] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:54.186 [2024-04-27 02:32:27.560310] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:54.186 [2024-04-27 02:32:27.560316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:54.186 
[2024-04-27 02:32:27.560324] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:54.186 [2024-04-27 02:32:27.560328] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:54.186 [2024-04-27 02:32:27.560340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:54.186 [2024-04-27 02:32:27.560347] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:54.186 [2024-04-27 02:32:27.560352] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:54.186 [2024-04-27 02:32:27.560357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:54.186 [2024-04-27 02:32:27.560365] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:54.186 [2024-04-27 02:32:27.560369] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:54.186 [2024-04-27 02:32:27.560375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:54.186 [2024-04-27 02:32:27.568285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:54.186 [2024-04-27 02:32:27.568302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:54.186 [2024-04-27 02:32:27.568311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:54.186 [2024-04-27 02:32:27.568318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:54.186 ===================================================== 00:12:54.186 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:54.186 ===================================================== 00:12:54.186 Controller Capabilities/Features 00:12:54.186 ================================ 00:12:54.186 Vendor ID: 4e58 00:12:54.186 Subsystem Vendor ID: 4e58 00:12:54.186 Serial Number: SPDK2 00:12:54.186 Model Number: SPDK bdev Controller 00:12:54.186 Firmware Version: 24.05 00:12:54.186 Recommended Arb Burst: 6 00:12:54.186 IEEE OUI Identifier: 8d 6b 50 00:12:54.186 Multi-path I/O 00:12:54.186 May have multiple subsystem ports: Yes 00:12:54.186 May have multiple controllers: Yes 00:12:54.186 Associated with SR-IOV VF: No 00:12:54.186 Max Data Transfer Size: 131072 00:12:54.186 Max Number of Namespaces: 32 00:12:54.186 Max Number of I/O Queues: 127 00:12:54.186 NVMe Specification Version (VS): 1.3 00:12:54.186 NVMe Specification Version (Identify): 1.3 00:12:54.186 Maximum Queue Entries: 256 00:12:54.186 Contiguous Queues Required: Yes 00:12:54.186 Arbitration Mechanisms Supported 00:12:54.186 Weighted Round Robin: Not Supported 00:12:54.186 Vendor Specific: Not Supported 00:12:54.186 Reset Timeout: 15000 ms 00:12:54.186 Doorbell Stride: 4 bytes 00:12:54.186 NVM Subsystem Reset: Not Supported 00:12:54.186 Command Sets Supported 00:12:54.186 NVM Command Set: Supported 00:12:54.186 Boot Partition: Not Supported 00:12:54.186 
Memory Page Size Minimum: 4096 bytes 00:12:54.186 Memory Page Size Maximum: 4096 bytes 00:12:54.186 Persistent Memory Region: Not Supported 00:12:54.186 Optional Asynchronous Events Supported 00:12:54.186 Namespace Attribute Notices: Supported 00:12:54.186 Firmware Activation Notices: Not Supported 00:12:54.186 ANA Change Notices: Not Supported 00:12:54.186 PLE Aggregate Log Change Notices: Not Supported 00:12:54.186 LBA Status Info Alert Notices: Not Supported 00:12:54.186 EGE Aggregate Log Change Notices: Not Supported 00:12:54.186 Normal NVM Subsystem Shutdown event: Not Supported 00:12:54.186 Zone Descriptor Change Notices: Not Supported 00:12:54.186 Discovery Log Change Notices: Not Supported 00:12:54.186 Controller Attributes 00:12:54.186 128-bit Host Identifier: Supported 00:12:54.186 Non-Operational Permissive Mode: Not Supported 00:12:54.186 NVM Sets: Not Supported 00:12:54.186 Read Recovery Levels: Not Supported 00:12:54.186 Endurance Groups: Not Supported 00:12:54.186 Predictable Latency Mode: Not Supported 00:12:54.186 Traffic Based Keep ALive: Not Supported 00:12:54.186 Namespace Granularity: Not Supported 00:12:54.186 SQ Associations: Not Supported 00:12:54.186 UUID List: Not Supported 00:12:54.186 Multi-Domain Subsystem: Not Supported 00:12:54.186 Fixed Capacity Management: Not Supported 00:12:54.186 Variable Capacity Management: Not Supported 00:12:54.186 Delete Endurance Group: Not Supported 00:12:54.186 Delete NVM Set: Not Supported 00:12:54.186 Extended LBA Formats Supported: Not Supported 00:12:54.186 Flexible Data Placement Supported: Not Supported 00:12:54.186 00:12:54.186 Controller Memory Buffer Support 00:12:54.186 ================================ 00:12:54.186 Supported: No 00:12:54.186 00:12:54.186 Persistent Memory Region Support 00:12:54.186 ================================ 00:12:54.186 Supported: No 00:12:54.186 00:12:54.186 Admin Command Set Attributes 00:12:54.186 ============================ 00:12:54.186 Security Send/Receive: Not Supported 00:12:54.186 Format NVM: Not Supported 00:12:54.186 Firmware Activate/Download: Not Supported 00:12:54.186 Namespace Management: Not Supported 00:12:54.186 Device Self-Test: Not Supported 00:12:54.186 Directives: Not Supported 00:12:54.186 NVMe-MI: Not Supported 00:12:54.186 Virtualization Management: Not Supported 00:12:54.186 Doorbell Buffer Config: Not Supported 00:12:54.186 Get LBA Status Capability: Not Supported 00:12:54.186 Command & Feature Lockdown Capability: Not Supported 00:12:54.186 Abort Command Limit: 4 00:12:54.186 Async Event Request Limit: 4 00:12:54.186 Number of Firmware Slots: N/A 00:12:54.186 Firmware Slot 1 Read-Only: N/A 00:12:54.186 Firmware Activation Without Reset: N/A 00:12:54.186 Multiple Update Detection Support: N/A 00:12:54.186 Firmware Update Granularity: No Information Provided 00:12:54.186 Per-Namespace SMART Log: No 00:12:54.186 Asymmetric Namespace Access Log Page: Not Supported 00:12:54.186 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:54.186 Command Effects Log Page: Supported 00:12:54.186 Get Log Page Extended Data: Supported 00:12:54.186 Telemetry Log Pages: Not Supported 00:12:54.186 Persistent Event Log Pages: Not Supported 00:12:54.186 Supported Log Pages Log Page: May Support 00:12:54.186 Commands Supported & Effects Log Page: Not Supported 00:12:54.186 Feature Identifiers & Effects Log Page:May Support 00:12:54.186 NVMe-MI Commands & Effects Log Page: May Support 00:12:54.186 Data Area 4 for Telemetry Log: Not Supported 00:12:54.186 Error Log Page Entries Supported: 128 
00:12:54.186 Keep Alive: Supported 00:12:54.186 Keep Alive Granularity: 10000 ms 00:12:54.186 00:12:54.186 NVM Command Set Attributes 00:12:54.186 ========================== 00:12:54.186 Submission Queue Entry Size 00:12:54.186 Max: 64 00:12:54.186 Min: 64 00:12:54.186 Completion Queue Entry Size 00:12:54.186 Max: 16 00:12:54.186 Min: 16 00:12:54.186 Number of Namespaces: 32 00:12:54.186 Compare Command: Supported 00:12:54.186 Write Uncorrectable Command: Not Supported 00:12:54.186 Dataset Management Command: Supported 00:12:54.186 Write Zeroes Command: Supported 00:12:54.186 Set Features Save Field: Not Supported 00:12:54.186 Reservations: Not Supported 00:12:54.186 Timestamp: Not Supported 00:12:54.186 Copy: Supported 00:12:54.186 Volatile Write Cache: Present 00:12:54.186 Atomic Write Unit (Normal): 1 00:12:54.186 Atomic Write Unit (PFail): 1 00:12:54.186 Atomic Compare & Write Unit: 1 00:12:54.186 Fused Compare & Write: Supported 00:12:54.186 Scatter-Gather List 00:12:54.186 SGL Command Set: Supported (Dword aligned) 00:12:54.186 SGL Keyed: Not Supported 00:12:54.186 SGL Bit Bucket Descriptor: Not Supported 00:12:54.186 SGL Metadata Pointer: Not Supported 00:12:54.186 Oversized SGL: Not Supported 00:12:54.186 SGL Metadata Address: Not Supported 00:12:54.186 SGL Offset: Not Supported 00:12:54.186 Transport SGL Data Block: Not Supported 00:12:54.186 Replay Protected Memory Block: Not Supported 00:12:54.186 00:12:54.186 Firmware Slot Information 00:12:54.186 ========================= 00:12:54.186 Active slot: 1 00:12:54.186 Slot 1 Firmware Revision: 24.05 00:12:54.186 00:12:54.186 00:12:54.186 Commands Supported and Effects 00:12:54.186 ============================== 00:12:54.186 Admin Commands 00:12:54.186 -------------- 00:12:54.186 Get Log Page (02h): Supported 00:12:54.186 Identify (06h): Supported 00:12:54.186 Abort (08h): Supported 00:12:54.186 Set Features (09h): Supported 00:12:54.186 Get Features (0Ah): Supported 00:12:54.186 Asynchronous Event Request (0Ch): Supported 00:12:54.186 Keep Alive (18h): Supported 00:12:54.187 I/O Commands 00:12:54.187 ------------ 00:12:54.187 Flush (00h): Supported LBA-Change 00:12:54.187 Write (01h): Supported LBA-Change 00:12:54.187 Read (02h): Supported 00:12:54.187 Compare (05h): Supported 00:12:54.187 Write Zeroes (08h): Supported LBA-Change 00:12:54.187 Dataset Management (09h): Supported LBA-Change 00:12:54.187 Copy (19h): Supported LBA-Change 00:12:54.187 Unknown (79h): Supported LBA-Change 00:12:54.187 Unknown (7Ah): Supported 00:12:54.187 00:12:54.187 Error Log 00:12:54.187 ========= 00:12:54.187 00:12:54.187 Arbitration 00:12:54.187 =========== 00:12:54.187 Arbitration Burst: 1 00:12:54.187 00:12:54.187 Power Management 00:12:54.187 ================ 00:12:54.187 Number of Power States: 1 00:12:54.187 Current Power State: Power State #0 00:12:54.187 Power State #0: 00:12:54.187 Max Power: 0.00 W 00:12:54.187 Non-Operational State: Operational 00:12:54.187 Entry Latency: Not Reported 00:12:54.187 Exit Latency: Not Reported 00:12:54.187 Relative Read Throughput: 0 00:12:54.187 Relative Read Latency: 0 00:12:54.187 Relative Write Throughput: 0 00:12:54.187 Relative Write Latency: 0 00:12:54.187 Idle Power: Not Reported 00:12:54.187 Active Power: Not Reported 00:12:54.187 Non-Operational Permissive Mode: Not Supported 00:12:54.187 00:12:54.187 Health Information 00:12:54.187 ================== 00:12:54.187 Critical Warnings: 00:12:54.187 Available Spare Space: OK 00:12:54.187 Temperature: OK 00:12:54.187 Device Reliability: OK 00:12:54.187 
Read Only: No 00:12:54.187 Volatile Memory Backup: OK 00:12:54.187 Current Temperature: 0 Kelvin (-2[2024-04-27 02:32:27.568419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:54.187 [2024-04-27 02:32:27.576284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:54.187 [2024-04-27 02:32:27.576313] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:54.187 [2024-04-27 02:32:27.576322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.187 [2024-04-27 02:32:27.576328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.187 [2024-04-27 02:32:27.576335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.187 [2024-04-27 02:32:27.576341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.187 [2024-04-27 02:32:27.576392] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:54.187 [2024-04-27 02:32:27.576403] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:54.187 [2024-04-27 02:32:27.577397] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:54.187 [2024-04-27 02:32:27.577447] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:54.187 [2024-04-27 02:32:27.577454] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:54.187 [2024-04-27 02:32:27.578404] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:54.187 [2024-04-27 02:32:27.578416] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:54.187 [2024-04-27 02:32:27.578463] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:54.187 [2024-04-27 02:32:27.581284] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:54.187 73 Celsius) 00:12:54.187 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:54.187 Available Spare: 0% 00:12:54.187 Available Spare Threshold: 0% 00:12:54.187 Life Percentage Used: 0% 00:12:54.187 Data Units Read: 0 00:12:54.187 Data Units Written: 0 00:12:54.187 Host Read Commands: 0 00:12:54.187 Host Write Commands: 0 00:12:54.187 Controller Busy Time: 0 minutes 00:12:54.187 Power Cycles: 0 00:12:54.187 Power On Hours: 0 hours 00:12:54.187 Unsafe Shutdowns: 0 00:12:54.187 Unrecoverable Media Errors: 0 00:12:54.187 Lifetime Error Log Entries: 0 00:12:54.187 Warning Temperature Time: 0 minutes 00:12:54.187 Critical Temperature Time: 0 minutes 00:12:54.187 00:12:54.187 Number of Queues 00:12:54.187 ================ 00:12:54.187 Number of I/O Submission Queues: 127 
00:12:54.187 Number of I/O Completion Queues: 127 00:12:54.187 00:12:54.187 Active Namespaces 00:12:54.187 ================= 00:12:54.187 Namespace ID:1 00:12:54.187 Error Recovery Timeout: Unlimited 00:12:54.187 Command Set Identifier: NVM (00h) 00:12:54.187 Deallocate: Supported 00:12:54.187 Deallocated/Unwritten Error: Not Supported 00:12:54.187 Deallocated Read Value: Unknown 00:12:54.187 Deallocate in Write Zeroes: Not Supported 00:12:54.187 Deallocated Guard Field: 0xFFFF 00:12:54.187 Flush: Supported 00:12:54.187 Reservation: Supported 00:12:54.187 Namespace Sharing Capabilities: Multiple Controllers 00:12:54.187 Size (in LBAs): 131072 (0GiB) 00:12:54.187 Capacity (in LBAs): 131072 (0GiB) 00:12:54.187 Utilization (in LBAs): 131072 (0GiB) 00:12:54.187 NGUID: 1C3EB676D54246C79C976FE5C1E3A5FC 00:12:54.187 UUID: 1c3eb676-d542-46c7-9c97-6fe5c1e3a5fc 00:12:54.187 Thin Provisioning: Not Supported 00:12:54.187 Per-NS Atomic Units: Yes 00:12:54.187 Atomic Boundary Size (Normal): 0 00:12:54.187 Atomic Boundary Size (PFail): 0 00:12:54.187 Atomic Boundary Offset: 0 00:12:54.187 Maximum Single Source Range Length: 65535 00:12:54.187 Maximum Copy Length: 65535 00:12:54.187 Maximum Source Range Count: 1 00:12:54.187 NGUID/EUI64 Never Reused: No 00:12:54.187 Namespace Write Protected: No 00:12:54.187 Number of LBA Formats: 1 00:12:54.187 Current LBA Format: LBA Format #00 00:12:54.187 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:54.187 00:12:54.187 02:32:27 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:54.187 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.187 [2024-04-27 02:32:27.781663] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:59.480 [2024-04-27 02:32:32.884478] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:59.480 Initializing NVMe Controllers 00:12:59.480 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:59.480 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:59.480 Initialization complete. Launching workers. 
00:12:59.480 ======================================================== 00:12:59.480 Latency(us) 00:12:59.480 Device Information : IOPS MiB/s Average min max 00:12:59.480 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 44098.63 172.26 2902.07 912.76 5841.06 00:12:59.480 ======================================================== 00:12:59.480 Total : 44098.63 172.26 2902.07 912.76 5841.06 00:12:59.480 00:12:59.480 02:32:32 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:59.480 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.480 [2024-04-27 02:32:33.079144] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:04.775 [2024-04-27 02:32:38.099202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:04.775 Initializing NVMe Controllers 00:13:04.775 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:04.775 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:04.775 Initialization complete. Launching workers. 00:13:04.775 ======================================================== 00:13:04.775 Latency(us) 00:13:04.775 Device Information : IOPS MiB/s Average min max 00:13:04.775 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33219.00 129.76 3853.74 1241.35 9635.35 00:13:04.775 ======================================================== 00:13:04.775 Total : 33219.00 129.76 3853.74 1241.35 9635.35 00:13:04.775 00:13:04.775 02:32:38 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:04.775 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.775 [2024-04-27 02:32:38.318751] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:10.070 [2024-04-27 02:32:43.463372] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:10.070 Initializing NVMe Controllers 00:13:10.070 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:10.070 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:10.070 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:10.070 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:10.070 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:10.070 Initialization complete. Launching workers. 
00:13:10.070 Starting thread on core 2 00:13:10.070 Starting thread on core 3 00:13:10.070 Starting thread on core 1 00:13:10.070 02:32:43 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:10.070 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.330 [2024-04-27 02:32:43.729986] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:13.632 [2024-04-27 02:32:46.783969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:13.632 Initializing NVMe Controllers 00:13:13.632 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.632 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.632 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:13.632 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:13.632 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:13.632 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:13.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:13.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:13.632 Initialization complete. Launching workers. 00:13:13.632 Starting thread on core 1 with urgent priority queue 00:13:13.632 Starting thread on core 2 with urgent priority queue 00:13:13.632 Starting thread on core 3 with urgent priority queue 00:13:13.632 Starting thread on core 0 with urgent priority queue 00:13:13.632 SPDK bdev Controller (SPDK2 ) core 0: 9785.67 IO/s 10.22 secs/100000 ios 00:13:13.632 SPDK bdev Controller (SPDK2 ) core 1: 8127.67 IO/s 12.30 secs/100000 ios 00:13:13.632 SPDK bdev Controller (SPDK2 ) core 2: 9143.33 IO/s 10.94 secs/100000 ios 00:13:13.632 SPDK bdev Controller (SPDK2 ) core 3: 8822.67 IO/s 11.33 secs/100000 ios 00:13:13.632 ======================================================== 00:13:13.632 00:13:13.632 02:32:46 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:13.632 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.632 [2024-04-27 02:32:47.046767] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:13.632 [2024-04-27 02:32:47.056840] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:13.632 Initializing NVMe Controllers 00:13:13.632 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.632 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.632 Namespace ID: 1 size: 0GB 00:13:13.632 Initialization complete. 00:13:13.632 INFO: using host memory buffer for IO 00:13:13.632 Hello world! 
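For reference, the target-side RPC sequence that these example apps (perf, reconnect, arbitration, hello_world) rely on condenses to roughly the sketch below; paths are the ones used in this workspace, and the shortened $rpc variable plus the trailing identify invocation are illustrative, not part of the captured output.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user2/2
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
# initiator side, as exercised by the examples above:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g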
00:13:13.632 02:32:47 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:13.632 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.893 [2024-04-27 02:32:47.313256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:14.836 Initializing NVMe Controllers 00:13:14.836 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.836 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.837 Initialization complete. Launching workers. 00:13:14.837 submit (in ns) avg, min, max = 9648.1, 3874.2, 4007655.0 00:13:14.837 complete (in ns) avg, min, max = 20879.3, 2346.7, 4005737.5 00:13:14.837 00:13:14.837 Submit histogram 00:13:14.837 ================ 00:13:14.837 Range in us Cumulative Count 00:13:14.837 3.867 - 3.893: 1.5576% ( 233) 00:13:14.837 3.893 - 3.920: 7.6609% ( 913) 00:13:14.837 3.920 - 3.947: 16.6254% ( 1341) 00:13:14.837 3.947 - 3.973: 26.9269% ( 1541) 00:13:14.837 3.973 - 4.000: 37.2351% ( 1542) 00:13:14.837 4.000 - 4.027: 47.1622% ( 1485) 00:13:14.837 4.027 - 4.053: 63.5136% ( 2446) 00:13:14.837 4.053 - 4.080: 80.2661% ( 2506) 00:13:14.837 4.080 - 4.107: 92.0783% ( 1767) 00:13:14.837 4.107 - 4.133: 97.5801% ( 823) 00:13:14.837 4.133 - 4.160: 98.9505% ( 205) 00:13:14.837 4.160 - 4.187: 99.3449% ( 59) 00:13:14.837 4.187 - 4.213: 99.4184% ( 11) 00:13:14.837 4.213 - 4.240: 99.4318% ( 2) 00:13:14.837 4.373 - 4.400: 99.4385% ( 1) 00:13:14.837 4.453 - 4.480: 99.4518% ( 2) 00:13:14.837 4.667 - 4.693: 99.4585% ( 1) 00:13:14.837 4.693 - 4.720: 99.4652% ( 1) 00:13:14.837 4.880 - 4.907: 99.4719% ( 1) 00:13:14.837 5.173 - 5.200: 99.4786% ( 1) 00:13:14.837 5.973 - 6.000: 99.4853% ( 1) 00:13:14.837 6.000 - 6.027: 99.4919% ( 1) 00:13:14.837 6.027 - 6.053: 99.4986% ( 1) 00:13:14.837 6.080 - 6.107: 99.5120% ( 2) 00:13:14.837 6.107 - 6.133: 99.5187% ( 1) 00:13:14.837 6.160 - 6.187: 99.5254% ( 1) 00:13:14.837 6.187 - 6.213: 99.5321% ( 1) 00:13:14.837 6.240 - 6.267: 99.5387% ( 1) 00:13:14.837 6.373 - 6.400: 99.5454% ( 1) 00:13:14.837 6.427 - 6.453: 99.5521% ( 1) 00:13:14.837 6.453 - 6.480: 99.5588% ( 1) 00:13:14.837 6.507 - 6.533: 99.5655% ( 1) 00:13:14.837 7.307 - 7.360: 99.5722% ( 1) 00:13:14.837 7.360 - 7.413: 99.5788% ( 1) 00:13:14.837 7.627 - 7.680: 99.5855% ( 1) 00:13:14.837 7.893 - 7.947: 99.5989% ( 2) 00:13:14.837 8.053 - 8.107: 99.6123% ( 2) 00:13:14.837 8.107 - 8.160: 99.6190% ( 1) 00:13:14.837 8.213 - 8.267: 99.6323% ( 2) 00:13:14.837 8.267 - 8.320: 99.6591% ( 4) 00:13:14.837 8.320 - 8.373: 99.6658% ( 1) 00:13:14.837 8.373 - 8.427: 99.6791% ( 2) 00:13:14.837 8.480 - 8.533: 99.6858% ( 1) 00:13:14.837 8.640 - 8.693: 99.6925% ( 1) 00:13:14.837 8.747 - 8.800: 99.7259% ( 5) 00:13:14.837 8.800 - 8.853: 99.7326% ( 1) 00:13:14.837 8.960 - 9.013: 99.7460% ( 2) 00:13:14.837 9.013 - 9.067: 99.7527% ( 1) 00:13:14.837 9.067 - 9.120: 99.7593% ( 1) 00:13:14.837 9.173 - 9.227: 99.7727% ( 2) 00:13:14.837 9.280 - 9.333: 99.7794% ( 1) 00:13:14.837 9.440 - 9.493: 99.7928% ( 2) 00:13:14.837 9.493 - 9.547: 99.8061% ( 2) 00:13:14.837 9.600 - 9.653: 99.8262% ( 3) 00:13:14.837 9.653 - 9.707: 99.8329% ( 1) 00:13:14.837 9.707 - 9.760: 99.8396% ( 1) 00:13:14.837 10.133 - 10.187: 99.8462% ( 1) 00:13:14.837 10.293 - 10.347: 99.8529% ( 1) 00:13:14.837 13.067 - 13.120: 99.8596% ( 1) 00:13:14.837 3986.773 - 4014.080: 100.0000% ( 21) 00:13:14.837 00:13:14.837 Complete 
histogram 00:13:14.837 ================== 00:13:14.837 Range in us Cumulative Count 00:13:14.837 2.347 - 2.360: 0.0067% ( 1) 00:13:14.837 2.360 - 2.373: 0.8156% ( 121) 00:13:14.837 2.373 - 2.387: 1.6645% ( 127) 00:13:14.837 2.387 - 2.400: 1.7515% ( 13) 00:13:14.837 2.400 - 2.413: 1.9253% ( 26) 00:13:14.837 2.413 - 2.427: 46.9350% ( 6733) 00:13:14.837 2.427 - 2.440: 58.8542% ( 1783) 00:13:14.837 2.440 - 2.453: 71.5756% ( 1903) 00:13:14.837 2.453 - 2.467: 78.2940% ( 1005) 00:13:14.837 2.467 - 2.480: 81.3624% ( 459) 00:13:14.837 2.480 - 2.493: 83.0470% ( 252) 00:13:14.837 2.493 - 2.507: 87.4724% ( 662) 00:13:14.837 2.507 - 2.520: 93.0477% ( 834) 00:13:14.837 2.520 - 2.533: 95.8821% ( 424) 00:13:14.837 2.533 - 2.547: 97.6937% ( 271) 00:13:14.837 2.547 - 2.560: 98.7767% ( 162) 00:13:14.837 2.560 - 2.573: 99.1711% ( 59) 00:13:14.837 2.573 - [2024-04-27 02:32:48.404967] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:14.837 2.587: 99.2446% ( 11) 00:13:14.837 2.587 - 2.600: 99.2513% ( 1) 00:13:14.837 4.880 - 4.907: 99.2580% ( 1) 00:13:14.837 5.840 - 5.867: 99.2647% ( 1) 00:13:14.837 5.893 - 5.920: 99.2713% ( 1) 00:13:14.837 6.053 - 6.080: 99.2780% ( 1) 00:13:14.837 6.080 - 6.107: 99.2847% ( 1) 00:13:14.837 6.160 - 6.187: 99.2981% ( 2) 00:13:14.837 6.187 - 6.213: 99.3048% ( 1) 00:13:14.837 6.267 - 6.293: 99.3115% ( 1) 00:13:14.837 6.320 - 6.347: 99.3248% ( 2) 00:13:14.837 6.347 - 6.373: 99.3315% ( 1) 00:13:14.837 6.373 - 6.400: 99.3382% ( 1) 00:13:14.837 6.400 - 6.427: 99.3449% ( 1) 00:13:14.837 6.453 - 6.480: 99.3516% ( 1) 00:13:14.837 6.507 - 6.533: 99.3582% ( 1) 00:13:14.837 6.587 - 6.613: 99.3649% ( 1) 00:13:14.837 6.613 - 6.640: 99.3716% ( 1) 00:13:14.837 6.773 - 6.800: 99.3783% ( 1) 00:13:14.837 6.827 - 6.880: 99.3917% ( 2) 00:13:14.837 6.880 - 6.933: 99.3984% ( 1) 00:13:14.837 6.933 - 6.987: 99.4117% ( 2) 00:13:14.837 6.987 - 7.040: 99.4184% ( 1) 00:13:14.837 7.040 - 7.093: 99.4452% ( 4) 00:13:14.837 7.093 - 7.147: 99.4518% ( 1) 00:13:14.837 7.253 - 7.307: 99.4585% ( 1) 00:13:14.837 7.307 - 7.360: 99.4652% ( 1) 00:13:14.837 7.360 - 7.413: 99.4719% ( 1) 00:13:14.837 7.467 - 7.520: 99.4786% ( 1) 00:13:14.837 7.520 - 7.573: 99.4853% ( 1) 00:13:14.837 8.053 - 8.107: 99.4919% ( 1) 00:13:14.837 8.267 - 8.320: 99.4986% ( 1) 00:13:14.837 8.640 - 8.693: 99.5053% ( 1) 00:13:14.837 8.693 - 8.747: 99.5120% ( 1) 00:13:14.837 8.853 - 8.907: 99.5187% ( 1) 00:13:14.837 10.240 - 10.293: 99.5254% ( 1) 00:13:14.837 11.093 - 11.147: 99.5321% ( 1) 00:13:14.837 44.373 - 44.587: 99.5387% ( 1) 00:13:14.837 3741.013 - 3768.320: 99.5454% ( 1) 00:13:14.837 3986.773 - 4014.080: 100.0000% ( 68) 00:13:14.837 00:13:14.837 02:32:48 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:14.837 02:32:48 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:14.837 02:32:48 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:14.837 02:32:48 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:14.837 02:32:48 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:15.098 [ 00:13:15.099 { 00:13:15.099 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:15.099 "subtype": "Discovery", 00:13:15.099 "listen_addresses": [], 00:13:15.099 "allow_any_host": true, 00:13:15.099 "hosts": [] 00:13:15.099 }, 00:13:15.099 { 00:13:15.099 "nqn": 
"nqn.2019-07.io.spdk:cnode1", 00:13:15.099 "subtype": "NVMe", 00:13:15.099 "listen_addresses": [ 00:13:15.099 { 00:13:15.099 "transport": "VFIOUSER", 00:13:15.099 "trtype": "VFIOUSER", 00:13:15.099 "adrfam": "IPv4", 00:13:15.099 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:15.099 "trsvcid": "0" 00:13:15.099 } 00:13:15.099 ], 00:13:15.099 "allow_any_host": true, 00:13:15.099 "hosts": [], 00:13:15.099 "serial_number": "SPDK1", 00:13:15.099 "model_number": "SPDK bdev Controller", 00:13:15.099 "max_namespaces": 32, 00:13:15.099 "min_cntlid": 1, 00:13:15.099 "max_cntlid": 65519, 00:13:15.099 "namespaces": [ 00:13:15.099 { 00:13:15.099 "nsid": 1, 00:13:15.099 "bdev_name": "Malloc1", 00:13:15.099 "name": "Malloc1", 00:13:15.099 "nguid": "546E92D6053A49BAA65753E4E3E52DFF", 00:13:15.099 "uuid": "546e92d6-053a-49ba-a657-53e4e3e52dff" 00:13:15.099 }, 00:13:15.099 { 00:13:15.099 "nsid": 2, 00:13:15.099 "bdev_name": "Malloc3", 00:13:15.099 "name": "Malloc3", 00:13:15.099 "nguid": "B789282DF66547F08CBDE2C5DCF049C4", 00:13:15.099 "uuid": "b789282d-f665-47f0-8cbd-e2c5dcf049c4" 00:13:15.099 } 00:13:15.099 ] 00:13:15.099 }, 00:13:15.099 { 00:13:15.099 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:15.099 "subtype": "NVMe", 00:13:15.099 "listen_addresses": [ 00:13:15.099 { 00:13:15.099 "transport": "VFIOUSER", 00:13:15.099 "trtype": "VFIOUSER", 00:13:15.099 "adrfam": "IPv4", 00:13:15.099 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:15.099 "trsvcid": "0" 00:13:15.099 } 00:13:15.099 ], 00:13:15.099 "allow_any_host": true, 00:13:15.099 "hosts": [], 00:13:15.099 "serial_number": "SPDK2", 00:13:15.099 "model_number": "SPDK bdev Controller", 00:13:15.099 "max_namespaces": 32, 00:13:15.099 "min_cntlid": 1, 00:13:15.099 "max_cntlid": 65519, 00:13:15.099 "namespaces": [ 00:13:15.099 { 00:13:15.099 "nsid": 1, 00:13:15.099 "bdev_name": "Malloc2", 00:13:15.099 "name": "Malloc2", 00:13:15.099 "nguid": "1C3EB676D54246C79C976FE5C1E3A5FC", 00:13:15.099 "uuid": "1c3eb676-d542-46c7-9c97-6fe5c1e3a5fc" 00:13:15.099 } 00:13:15.099 ] 00:13:15.099 } 00:13:15.099 ] 00:13:15.099 02:32:48 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:15.099 02:32:48 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:15.099 02:32:48 -- target/nvmf_vfio_user.sh@34 -- # aerpid=48083 00:13:15.099 02:32:48 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:15.099 02:32:48 -- common/autotest_common.sh@1251 -- # local i=0 00:13:15.099 02:32:48 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:15.099 02:32:48 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:15.099 02:32:48 -- common/autotest_common.sh@1262 -- # return 0 00:13:15.099 02:32:48 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:15.099 02:32:48 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:15.099 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.359 [2024-04-27 02:32:48.777685] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:15.359 Malloc4 00:13:15.359 02:32:48 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:15.360 [2024-04-27 02:32:48.949752] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:15.360 02:32:48 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:15.621 Asynchronous Event Request test 00:13:15.621 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.621 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.621 Registering asynchronous event callbacks... 00:13:15.621 Starting namespace attribute notice tests for all controllers... 00:13:15.621 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:15.621 aer_cb - Changed Namespace 00:13:15.621 Cleaning up... 00:13:15.621 [ 00:13:15.621 { 00:13:15.621 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:15.621 "subtype": "Discovery", 00:13:15.621 "listen_addresses": [], 00:13:15.621 "allow_any_host": true, 00:13:15.621 "hosts": [] 00:13:15.621 }, 00:13:15.621 { 00:13:15.621 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:15.621 "subtype": "NVMe", 00:13:15.621 "listen_addresses": [ 00:13:15.621 { 00:13:15.621 "transport": "VFIOUSER", 00:13:15.621 "trtype": "VFIOUSER", 00:13:15.621 "adrfam": "IPv4", 00:13:15.621 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:15.621 "trsvcid": "0" 00:13:15.621 } 00:13:15.621 ], 00:13:15.621 "allow_any_host": true, 00:13:15.621 "hosts": [], 00:13:15.621 "serial_number": "SPDK1", 00:13:15.621 "model_number": "SPDK bdev Controller", 00:13:15.621 "max_namespaces": 32, 00:13:15.621 "min_cntlid": 1, 00:13:15.621 "max_cntlid": 65519, 00:13:15.621 "namespaces": [ 00:13:15.621 { 00:13:15.621 "nsid": 1, 00:13:15.621 "bdev_name": "Malloc1", 00:13:15.621 "name": "Malloc1", 00:13:15.621 "nguid": "546E92D6053A49BAA65753E4E3E52DFF", 00:13:15.621 "uuid": "546e92d6-053a-49ba-a657-53e4e3e52dff" 00:13:15.621 }, 00:13:15.621 { 00:13:15.621 "nsid": 2, 00:13:15.621 "bdev_name": "Malloc3", 00:13:15.621 "name": "Malloc3", 00:13:15.621 "nguid": "B789282DF66547F08CBDE2C5DCF049C4", 00:13:15.621 "uuid": "b789282d-f665-47f0-8cbd-e2c5dcf049c4" 00:13:15.621 } 00:13:15.621 ] 00:13:15.621 }, 00:13:15.621 { 00:13:15.621 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:15.621 "subtype": "NVMe", 00:13:15.621 "listen_addresses": [ 00:13:15.621 { 00:13:15.621 "transport": "VFIOUSER", 00:13:15.621 "trtype": "VFIOUSER", 00:13:15.621 "adrfam": "IPv4", 00:13:15.621 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:15.621 "trsvcid": "0" 00:13:15.621 } 00:13:15.621 ], 00:13:15.621 "allow_any_host": true, 00:13:15.621 "hosts": [], 00:13:15.621 "serial_number": "SPDK2", 00:13:15.621 "model_number": "SPDK bdev Controller", 00:13:15.621 "max_namespaces": 32, 00:13:15.621 "min_cntlid": 1, 
00:13:15.621 "max_cntlid": 65519, 00:13:15.621 "namespaces": [ 00:13:15.621 { 00:13:15.621 "nsid": 1, 00:13:15.621 "bdev_name": "Malloc2", 00:13:15.621 "name": "Malloc2", 00:13:15.621 "nguid": "1C3EB676D54246C79C976FE5C1E3A5FC", 00:13:15.621 "uuid": "1c3eb676-d542-46c7-9c97-6fe5c1e3a5fc" 00:13:15.621 }, 00:13:15.621 { 00:13:15.621 "nsid": 2, 00:13:15.621 "bdev_name": "Malloc4", 00:13:15.621 "name": "Malloc4", 00:13:15.621 "nguid": "5C87EDA9B995467D842BB7952A79BF73", 00:13:15.621 "uuid": "5c87eda9-b995-467d-842b-b7952a79bf73" 00:13:15.621 } 00:13:15.621 ] 00:13:15.621 } 00:13:15.621 ] 00:13:15.621 02:32:49 -- target/nvmf_vfio_user.sh@44 -- # wait 48083 00:13:15.621 02:32:49 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:15.621 02:32:49 -- target/nvmf_vfio_user.sh@95 -- # killprocess 37104 00:13:15.621 02:32:49 -- common/autotest_common.sh@936 -- # '[' -z 37104 ']' 00:13:15.621 02:32:49 -- common/autotest_common.sh@940 -- # kill -0 37104 00:13:15.621 02:32:49 -- common/autotest_common.sh@941 -- # uname 00:13:15.621 02:32:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:15.621 02:32:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 37104 00:13:15.621 02:32:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:15.621 02:32:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:15.621 02:32:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 37104' 00:13:15.621 killing process with pid 37104 00:13:15.621 02:32:49 -- common/autotest_common.sh@955 -- # kill 37104 00:13:15.621 [2024-04-27 02:32:49.199042] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:15.621 02:32:49 -- common/autotest_common.sh@960 -- # wait 37104 00:13:15.882 02:32:49 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:15.882 02:32:49 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:15.882 02:32:49 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:15.882 02:32:49 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:15.882 02:32:49 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:15.882 02:32:49 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=48142 00:13:15.882 02:32:49 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 48142' 00:13:15.882 Process pid: 48142 00:13:15.882 02:32:49 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:15.882 02:32:49 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:15.882 02:32:49 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 48142 00:13:15.882 02:32:49 -- common/autotest_common.sh@817 -- # '[' -z 48142 ']' 00:13:15.882 02:32:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.882 02:32:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:15.882 02:32:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:15.882 02:32:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:15.882 02:32:49 -- common/autotest_common.sh@10 -- # set +x 00:13:15.882 [2024-04-27 02:32:49.426254] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:15.882 [2024-04-27 02:32:49.427179] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:13:15.882 [2024-04-27 02:32:49.427226] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.882 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.882 [2024-04-27 02:32:49.486865] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.143 [2024-04-27 02:32:49.550838] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.143 [2024-04-27 02:32:49.550877] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.143 [2024-04-27 02:32:49.550885] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.143 [2024-04-27 02:32:49.550891] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.143 [2024-04-27 02:32:49.550896] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.143 [2024-04-27 02:32:49.551013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.143 [2024-04-27 02:32:49.551147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.143 [2024-04-27 02:32:49.551306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.143 [2024-04-27 02:32:49.551330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.143 [2024-04-27 02:32:49.616179] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:13:16.143 [2024-04-27 02:32:49.616308] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:13:16.143 [2024-04-27 02:32:49.616491] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:13:16.143 [2024-04-27 02:32:49.616807] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:16.143 [2024-04-27 02:32:49.616896] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
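
The pass that starts here repeats the vfio-user fixture with the target relaunched under --interrupt-mode and the transport created with the extra '-M -I' arguments. Condensed into a plain shell sketch, with every path, size and flag copied from the trace in the next chunk (outside this CI workspace they are assumptions):

  # Hand replay of the per-controller setup traced below; assumes the --interrupt-mode
  # nvmf_tgt started above is up and answering RPCs on /var/tmp/spdk.sock.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

The trace then tears everything down again (killprocess plus rm -rf /var/run/vfio-user), so this interrupt-mode pass leaves nothing behind for the compliance test that follows.
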
00:13:16.717 02:32:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:16.717 02:32:50 -- common/autotest_common.sh@850 -- # return 0 00:13:16.717 02:32:50 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:17.658 02:32:51 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:17.919 02:32:51 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:17.919 02:32:51 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:17.919 02:32:51 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:17.919 02:32:51 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:17.919 02:32:51 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:17.919 Malloc1 00:13:18.179 02:32:51 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:18.179 02:32:51 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:18.440 02:32:51 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:18.440 02:32:52 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:18.440 02:32:52 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:18.440 02:32:52 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:18.815 Malloc2 00:13:18.815 02:32:52 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:18.815 02:32:52 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:19.122 02:32:52 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:19.122 02:32:52 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:19.122 02:32:52 -- target/nvmf_vfio_user.sh@95 -- # killprocess 48142 00:13:19.122 02:32:52 -- common/autotest_common.sh@936 -- # '[' -z 48142 ']' 00:13:19.122 02:32:52 -- common/autotest_common.sh@940 -- # kill -0 48142 00:13:19.122 02:32:52 -- common/autotest_common.sh@941 -- # uname 00:13:19.122 02:32:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:19.122 02:32:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 48142 00:13:19.122 02:32:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:19.122 02:32:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:19.122 02:32:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 48142' 00:13:19.122 killing process with pid 48142 00:13:19.122 02:32:52 -- common/autotest_common.sh@955 -- # kill 48142 00:13:19.122 02:32:52 -- common/autotest_common.sh@960 -- # wait 48142 00:13:19.383 02:32:52 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:13:19.383 02:32:52 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:19.383 00:13:19.383 real 0m50.788s 00:13:19.383 user 3m21.361s 00:13:19.383 sys 0m2.970s 00:13:19.383 02:32:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:19.383 02:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:19.383 ************************************ 00:13:19.383 END TEST nvmf_vfio_user 00:13:19.383 ************************************ 00:13:19.383 02:32:52 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:19.383 02:32:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:19.383 02:32:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:19.383 02:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:19.645 ************************************ 00:13:19.645 START TEST nvmf_vfio_user_nvme_compliance 00:13:19.645 ************************************ 00:13:19.645 02:32:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:19.645 * Looking for test storage... 00:13:19.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:19.645 02:32:53 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.645 02:32:53 -- nvmf/common.sh@7 -- # uname -s 00:13:19.645 02:32:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.645 02:32:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.645 02:32:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.645 02:32:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.645 02:32:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.645 02:32:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.645 02:32:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.645 02:32:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.645 02:32:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.645 02:32:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.645 02:32:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:19.645 02:32:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:19.645 02:32:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.645 02:32:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.645 02:32:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.645 02:32:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.645 02:32:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.645 02:32:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.645 02:32:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.645 02:32:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.645 02:32:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.645 02:32:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.645 02:32:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.645 02:32:53 -- paths/export.sh@5 -- # export PATH 00:13:19.645 02:32:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.645 02:32:53 -- nvmf/common.sh@47 -- # : 0 00:13:19.645 02:32:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.645 02:32:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.645 02:32:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.645 02:32:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.645 02:32:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.645 02:32:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.645 02:32:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.645 02:32:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.645 02:32:53 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:19.645 02:32:53 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:19.645 02:32:53 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:19.645 02:32:53 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:19.645 02:32:53 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:19.645 02:32:53 -- compliance/compliance.sh@20 -- # nvmfpid=49106 00:13:19.645 02:32:53 -- compliance/compliance.sh@21 -- # echo 'Process pid: 49106' 00:13:19.645 Process pid: 49106 00:13:19.645 02:32:53 -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:19.645 02:32:53 -- compliance/compliance.sh@24 -- # waitforlisten 49106 00:13:19.645 02:32:53 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:19.645 02:32:53 -- common/autotest_common.sh@817 -- # '[' -z 49106 ']' 00:13:19.645 02:32:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.645 02:32:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:19.645 02:32:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.645 02:32:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:19.645 02:32:53 -- common/autotest_common.sh@10 -- # set +x 00:13:19.645 [2024-04-27 02:32:53.233601] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:13:19.645 [2024-04-27 02:32:53.233681] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.645 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.906 [2024-04-27 02:32:53.298553] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:19.906 [2024-04-27 02:32:53.371119] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.907 [2024-04-27 02:32:53.371158] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.907 [2024-04-27 02:32:53.371166] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.907 [2024-04-27 02:32:53.371173] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.907 [2024-04-27 02:32:53.371179] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
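
Once the reactors below come up, the rpc_cmd calls in the next chunk (rpc_cmd is the autotest helper that shells out to scripts/rpc.py) build a single malloc-backed subsystem and aim the nvme_compliance binary at its vfio-user endpoint. Done by hand it would look roughly like this, with every argument lifted from the trace; treat it as a sketch rather than the script itself:

  # Compliance fixture replay; assumes the -m 0x7 nvmf_tgt started above is serving /var/tmp/spdk.sock.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  $SPDK/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

Each test case in the CUnit output that follows enables the controller, exercises one admin or I/O queue rule, and disables it again, which is why enable_ctrlr/disable_ctrlr notices bracket every 'passed' line.
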
00:13:19.907 [2024-04-27 02:32:53.371346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.907 [2024-04-27 02:32:53.371584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.907 [2024-04-27 02:32:53.371589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.477 02:32:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:20.477 02:32:54 -- common/autotest_common.sh@850 -- # return 0 00:13:20.477 02:32:54 -- compliance/compliance.sh@26 -- # sleep 1 00:13:21.418 02:32:55 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:21.418 02:32:55 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:21.418 02:32:55 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:21.418 02:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.418 02:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:21.418 02:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.418 02:32:55 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:21.418 02:32:55 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:21.418 02:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.418 02:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:21.679 malloc0 00:13:21.679 02:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.679 02:32:55 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:21.679 02:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.679 02:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:21.679 02:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.679 02:32:55 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:21.679 02:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.679 02:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:21.679 02:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.679 02:32:55 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:21.679 02:32:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.679 02:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:21.679 02:32:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.679 02:32:55 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:21.679 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.679 00:13:21.679 00:13:21.679 CUnit - A unit testing framework for C - Version 2.1-3 00:13:21.679 http://cunit.sourceforge.net/ 00:13:21.679 00:13:21.679 00:13:21.679 Suite: nvme_compliance 00:13:21.679 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-27 02:32:55.262716] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.679 [2024-04-27 02:32:55.264072] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:21.679 [2024-04-27 02:32:55.264086] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:21.679 [2024-04-27 02:32:55.264092] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:21.679 
[2024-04-27 02:32:55.266747] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.938 passed 00:13:21.938 Test: admin_identify_ctrlr_verify_fused ...[2024-04-27 02:32:55.361338] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.939 [2024-04-27 02:32:55.364357] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.939 passed 00:13:21.939 Test: admin_identify_ns ...[2024-04-27 02:32:55.460530] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.939 [2024-04-27 02:32:55.527286] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:21.939 [2024-04-27 02:32:55.535300] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:21.939 [2024-04-27 02:32:55.554497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.198 passed 00:13:22.198 Test: admin_get_features_mandatory_features ...[2024-04-27 02:32:55.646161] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.198 [2024-04-27 02:32:55.649179] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.198 passed 00:13:22.198 Test: admin_get_features_optional_features ...[2024-04-27 02:32:55.745750] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.198 [2024-04-27 02:32:55.748767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.198 passed 00:13:22.458 Test: admin_set_features_number_of_queues ...[2024-04-27 02:32:55.841878] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.458 [2024-04-27 02:32:55.946385] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.458 passed 00:13:22.458 Test: admin_get_log_page_mandatory_logs ...[2024-04-27 02:32:56.038394] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.458 [2024-04-27 02:32:56.041412] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.758 passed 00:13:22.758 Test: admin_get_log_page_with_lpo ...[2024-04-27 02:32:56.134529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.758 [2024-04-27 02:32:56.206290] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:22.758 [2024-04-27 02:32:56.219354] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.758 passed 00:13:22.758 Test: fabric_property_get ...[2024-04-27 02:32:56.308988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.759 [2024-04-27 02:32:56.310250] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:22.759 [2024-04-27 02:32:56.312008] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.759 passed 00:13:23.019 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-27 02:32:56.405714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.019 [2024-04-27 02:32:56.406950] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:23.019 [2024-04-27 02:32:56.408735] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:13:23.019 passed 00:13:23.019 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-27 02:32:56.499860] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.019 [2024-04-27 02:32:56.582284] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:23.019 [2024-04-27 02:32:56.598288] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:23.019 [2024-04-27 02:32:56.603384] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.279 passed 00:13:23.279 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-27 02:32:56.697389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.279 [2024-04-27 02:32:56.698616] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:23.279 [2024-04-27 02:32:56.700412] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.279 passed 00:13:23.279 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-27 02:32:56.793533] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.279 [2024-04-27 02:32:56.869294] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:23.279 [2024-04-27 02:32:56.893287] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:23.279 [2024-04-27 02:32:56.898360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.539 passed 00:13:23.539 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-27 02:32:56.992374] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.539 [2024-04-27 02:32:56.993599] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:23.539 [2024-04-27 02:32:56.993617] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:23.539 [2024-04-27 02:32:56.995388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.539 passed 00:13:23.539 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-27 02:32:57.088522] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.800 [2024-04-27 02:32:57.180283] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:23.800 [2024-04-27 02:32:57.188281] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:23.800 [2024-04-27 02:32:57.196284] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:23.800 [2024-04-27 02:32:57.204285] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:23.800 [2024-04-27 02:32:57.233363] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.800 passed 00:13:23.800 Test: admin_create_io_sq_verify_pc ...[2024-04-27 02:32:57.327349] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.800 [2024-04-27 02:32:57.347292] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:23.800 [2024-04-27 02:32:57.364519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.800 passed 00:13:24.060 Test: admin_create_io_qp_max_qps ...[2024-04-27 02:32:57.455069] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.003 [2024-04-27 02:32:58.553288] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:25.575 [2024-04-27 02:32:58.942006] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.575 passed 00:13:25.576 Test: admin_create_io_sq_shared_cq ...[2024-04-27 02:32:59.033224] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.576 [2024-04-27 02:32:59.166294] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:25.837 [2024-04-27 02:32:59.203349] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.837 passed 00:13:25.837 00:13:25.837 Run Summary: Type Total Ran Passed Failed Inactive 00:13:25.837 suites 1 1 n/a 0 0 00:13:25.837 tests 18 18 18 0 0 00:13:25.837 asserts 360 360 360 0 n/a 00:13:25.837 00:13:25.837 Elapsed time = 1.651 seconds 00:13:25.837 02:32:59 -- compliance/compliance.sh@42 -- # killprocess 49106 00:13:25.837 02:32:59 -- common/autotest_common.sh@936 -- # '[' -z 49106 ']' 00:13:25.837 02:32:59 -- common/autotest_common.sh@940 -- # kill -0 49106 00:13:25.837 02:32:59 -- common/autotest_common.sh@941 -- # uname 00:13:25.837 02:32:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:25.837 02:32:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 49106 00:13:25.837 02:32:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:25.837 02:32:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:25.837 02:32:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 49106' 00:13:25.837 killing process with pid 49106 00:13:25.837 02:32:59 -- common/autotest_common.sh@955 -- # kill 49106 00:13:25.837 02:32:59 -- common/autotest_common.sh@960 -- # wait 49106 00:13:25.837 02:32:59 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:25.837 02:32:59 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:25.837 00:13:25.837 real 0m6.409s 00:13:25.837 user 0m18.332s 00:13:25.837 sys 0m0.477s 00:13:25.837 02:32:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:25.837 02:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:25.837 ************************************ 00:13:25.837 END TEST nvmf_vfio_user_nvme_compliance 00:13:25.837 ************************************ 00:13:26.099 02:32:59 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:26.099 02:32:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:26.099 02:32:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:26.099 02:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:26.099 ************************************ 00:13:26.099 START TEST nvmf_vfio_user_fuzz 00:13:26.099 ************************************ 00:13:26.099 02:32:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:26.362 * Looking for test storage... 
00:13:26.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.362 02:32:59 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.362 02:32:59 -- nvmf/common.sh@7 -- # uname -s 00:13:26.362 02:32:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.362 02:32:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.362 02:32:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.362 02:32:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.362 02:32:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.362 02:32:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.362 02:32:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.362 02:32:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.362 02:32:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.362 02:32:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.362 02:32:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:26.362 02:32:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:26.362 02:32:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.362 02:32:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.362 02:32:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.362 02:32:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.362 02:32:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.362 02:32:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.362 02:32:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.362 02:32:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.362 02:32:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.362 02:32:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.362 02:32:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.362 02:32:59 -- paths/export.sh@5 -- # export PATH 00:13:26.362 02:32:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.362 02:32:59 -- nvmf/common.sh@47 -- # : 0 00:13:26.362 02:32:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.362 02:32:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.362 02:32:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.362 02:32:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.362 02:32:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.362 02:32:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.362 02:32:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.362 02:32:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.362 02:32:59 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:26.362 02:32:59 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:26.362 02:32:59 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:26.362 02:32:59 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:26.362 02:32:59 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:26.362 02:32:59 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:26.362 02:32:59 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:26.362 02:32:59 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=50363 00:13:26.362 02:32:59 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 50363' 00:13:26.362 Process pid: 50363 00:13:26.362 02:32:59 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:26.362 02:32:59 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:26.362 02:32:59 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 50363 00:13:26.362 02:32:59 -- common/autotest_common.sh@817 -- # '[' -z 50363 ']' 00:13:26.362 02:32:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.362 02:32:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:26.362 02:32:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
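
The fuzz stage that follows uses the same one-subsystem recipe, this time against a single-core target (-m 0x1) with the nvme_fuzz client driving random commands at it for 30 seconds. The invocation in the next chunk reduces to roughly this; all flags are copied from the trace, and this is a sketch rather than the script itself:

  # Fuzz fixture replay; assumes the -m 0x1 nvmf_tgt launched above is ready on /var/tmp/spdk.sock.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  # 30 s run, core mask 0x2, fixed seed 123456 so a failing run can be reproduced;
  # -N and -a are passed through exactly as the test passes them.
  $SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

The opcode and command counters dumped after 'Fuzzing completed.' are informational; the stage is essentially checking that the target survives the random traffic.
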
00:13:26.362 02:32:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:26.362 02:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:27.307 02:33:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:27.307 02:33:00 -- common/autotest_common.sh@850 -- # return 0 00:13:27.307 02:33:00 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:28.251 02:33:01 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:28.251 02:33:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.251 02:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:28.251 02:33:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.251 02:33:01 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:28.251 02:33:01 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:28.251 02:33:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.251 02:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:28.251 malloc0 00:13:28.251 02:33:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.251 02:33:01 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:28.251 02:33:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.251 02:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:28.251 02:33:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.251 02:33:01 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:28.251 02:33:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.251 02:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:28.251 02:33:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.251 02:33:01 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:28.251 02:33:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.251 02:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:28.251 02:33:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.251 02:33:01 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:28.251 02:33:01 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:00.374 Fuzzing completed. 
Shutting down the fuzz application 00:14:00.374 00:14:00.374 Dumping successful admin opcodes: 00:14:00.374 8, 9, 10, 24, 00:14:00.374 Dumping successful io opcodes: 00:14:00.374 0, 00:14:00.374 NS: 0x200003a1ef00 I/O qp, Total commands completed: 983614, total successful commands: 3855, random_seed: 3141412288 00:14:00.374 NS: 0x200003a1ef00 admin qp, Total commands completed: 242042, total successful commands: 1944, random_seed: 280797440 00:14:00.374 02:33:32 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:00.374 02:33:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:00.374 02:33:32 -- common/autotest_common.sh@10 -- # set +x 00:14:00.374 02:33:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:00.374 02:33:32 -- target/vfio_user_fuzz.sh@46 -- # killprocess 50363 00:14:00.374 02:33:32 -- common/autotest_common.sh@936 -- # '[' -z 50363 ']' 00:14:00.374 02:33:32 -- common/autotest_common.sh@940 -- # kill -0 50363 00:14:00.374 02:33:32 -- common/autotest_common.sh@941 -- # uname 00:14:00.374 02:33:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:00.374 02:33:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 50363 00:14:00.374 02:33:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:00.374 02:33:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:00.374 02:33:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 50363' 00:14:00.374 killing process with pid 50363 00:14:00.374 02:33:32 -- common/autotest_common.sh@955 -- # kill 50363 00:14:00.374 02:33:32 -- common/autotest_common.sh@960 -- # wait 50363 00:14:00.374 02:33:32 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:00.374 02:33:32 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:00.374 00:14:00.374 real 0m32.703s 00:14:00.374 user 0m36.610s 00:14:00.374 sys 0m24.622s 00:14:00.374 02:33:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:00.374 02:33:32 -- common/autotest_common.sh@10 -- # set +x 00:14:00.374 ************************************ 00:14:00.374 END TEST nvmf_vfio_user_fuzz 00:14:00.374 ************************************ 00:14:00.374 02:33:32 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:00.374 02:33:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:00.374 02:33:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:00.374 02:33:32 -- common/autotest_common.sh@10 -- # set +x 00:14:00.374 ************************************ 00:14:00.375 START TEST nvmf_host_management 00:14:00.375 ************************************ 00:14:00.375 02:33:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:00.375 * Looking for test storage... 
00:14:00.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.375 02:33:32 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.375 02:33:32 -- nvmf/common.sh@7 -- # uname -s 00:14:00.375 02:33:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.375 02:33:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.375 02:33:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.375 02:33:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.375 02:33:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.375 02:33:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.375 02:33:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.375 02:33:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.375 02:33:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.375 02:33:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.375 02:33:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:00.375 02:33:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:00.375 02:33:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.375 02:33:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.375 02:33:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.375 02:33:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.375 02:33:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.375 02:33:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.375 02:33:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.375 02:33:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.375 02:33:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.375 02:33:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.375 02:33:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.375 02:33:32 -- paths/export.sh@5 -- # export PATH 00:14:00.375 02:33:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.375 02:33:32 -- nvmf/common.sh@47 -- # : 0 00:14:00.375 02:33:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.375 02:33:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.375 02:33:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.375 02:33:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.375 02:33:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.375 02:33:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.375 02:33:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.375 02:33:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.375 02:33:32 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.375 02:33:32 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.375 02:33:32 -- target/host_management.sh@105 -- # nvmftestinit 00:14:00.375 02:33:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:00.375 02:33:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.375 02:33:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:00.375 02:33:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:00.375 02:33:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:00.375 02:33:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.375 02:33:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.375 02:33:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.375 02:33:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:00.375 02:33:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:00.375 02:33:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.375 02:33:32 -- common/autotest_common.sh@10 -- # set +x 00:14:05.671 02:33:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:05.671 02:33:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:05.671 02:33:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:05.671 02:33:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:05.671 02:33:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:05.671 02:33:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:05.671 02:33:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:05.672 02:33:38 -- nvmf/common.sh@295 -- # net_devs=() 00:14:05.672 02:33:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:05.672 
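
Since this is a phy run (NET_TYPE=phy), the chunks that follow scan the PCI bus for the two E810-class ports (8086:159b, the e810 list in the trace), move one of them into a private network namespace to act as the target side, address both ends, open TCP port 4420 in the firewall and ping across the link before any NVMe/TCP work starts. Stripped of the shell tracing, the fixture is roughly the following; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are simply what this rig uses:

  # Target-side port isolated in a namespace, initiator-side port left in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # reachability check before the tests

Every nvmf_tgt in the host-management tests is then launched inside cvl_0_0_ns_spdk (the 'ip netns exec cvl_0_0_ns_spdk' prefix folded into NVMF_APP further down), so the initiator in the root namespace reaches the target at 10.0.0.2:4420 over the link between the two ports.
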
02:33:38 -- nvmf/common.sh@296 -- # e810=() 00:14:05.672 02:33:38 -- nvmf/common.sh@296 -- # local -ga e810 00:14:05.672 02:33:38 -- nvmf/common.sh@297 -- # x722=() 00:14:05.672 02:33:38 -- nvmf/common.sh@297 -- # local -ga x722 00:14:05.672 02:33:38 -- nvmf/common.sh@298 -- # mlx=() 00:14:05.672 02:33:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:05.672 02:33:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.672 02:33:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.672 02:33:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.672 02:33:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.672 02:33:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.672 02:33:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.672 02:33:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.672 02:33:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.672 02:33:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.672 02:33:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.672 02:33:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.672 02:33:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:05.672 02:33:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:05.672 02:33:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:05.672 02:33:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.672 02:33:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:05.672 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:05.672 02:33:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.672 02:33:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:05.672 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:05.672 02:33:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:05.672 02:33:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.672 02:33:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.672 02:33:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:05.672 02:33:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.672 02:33:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:4b:00.0: cvl_0_0' 00:14:05.672 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:05.672 02:33:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.672 02:33:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.672 02:33:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.672 02:33:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:05.672 02:33:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.672 02:33:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:05.672 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:05.672 02:33:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.672 02:33:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:05.672 02:33:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:05.672 02:33:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:05.672 02:33:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:05.672 02:33:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.672 02:33:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.672 02:33:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.672 02:33:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:05.672 02:33:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.672 02:33:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.672 02:33:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:05.672 02:33:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.672 02:33:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.672 02:33:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:05.672 02:33:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:05.672 02:33:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.672 02:33:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.672 02:33:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.672 02:33:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.672 02:33:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:05.672 02:33:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.672 02:33:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.672 02:33:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.672 02:33:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:05.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:14:05.672 00:14:05.672 --- 10.0.0.2 ping statistics --- 00:14:05.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.672 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:14:05.672 02:33:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:05.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:14:05.672 00:14:05.672 --- 10.0.0.1 ping statistics --- 00:14:05.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.672 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:14:05.672 02:33:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.672 02:33:39 -- nvmf/common.sh@411 -- # return 0 00:14:05.672 02:33:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:05.672 02:33:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.672 02:33:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:05.672 02:33:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:05.672 02:33:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.672 02:33:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:05.672 02:33:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:05.935 02:33:39 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:14:05.935 02:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:05.935 02:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:05.935 02:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:05.935 ************************************ 00:14:05.935 START TEST nvmf_host_management 00:14:05.935 ************************************ 00:14:05.935 02:33:39 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:14:05.935 02:33:39 -- target/host_management.sh@69 -- # starttarget 00:14:05.935 02:33:39 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:05.935 02:33:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:05.935 02:33:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:05.935 02:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:05.935 02:33:39 -- nvmf/common.sh@470 -- # nvmfpid=61015 00:14:05.935 02:33:39 -- nvmf/common.sh@471 -- # waitforlisten 61015 00:14:05.935 02:33:39 -- common/autotest_common.sh@817 -- # '[' -z 61015 ']' 00:14:05.935 02:33:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:05.935 02:33:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.935 02:33:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:05.935 02:33:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.935 02:33:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:05.935 02:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:05.935 [2024-04-27 02:33:39.510848] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:14:05.935 [2024-04-27 02:33:39.510908] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.935 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.197 [2024-04-27 02:33:39.583257] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.197 [2024-04-27 02:33:39.656772] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:06.197 [2024-04-27 02:33:39.656812] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.197 [2024-04-27 02:33:39.656822] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.197 [2024-04-27 02:33:39.656829] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.197 [2024-04-27 02:33:39.656836] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.197 [2024-04-27 02:33:39.656974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.197 [2024-04-27 02:33:39.657098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.197 [2024-04-27 02:33:39.657257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.197 [2024-04-27 02:33:39.657258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:06.769 02:33:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:06.769 02:33:40 -- common/autotest_common.sh@850 -- # return 0 00:14:06.769 02:33:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:06.769 02:33:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:06.769 02:33:40 -- common/autotest_common.sh@10 -- # set +x 00:14:06.769 02:33:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.769 02:33:40 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.769 02:33:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.769 02:33:40 -- common/autotest_common.sh@10 -- # set +x 00:14:06.769 [2024-04-27 02:33:40.335840] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.769 02:33:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.769 02:33:40 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:06.769 02:33:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:06.769 02:33:40 -- common/autotest_common.sh@10 -- # set +x 00:14:06.769 02:33:40 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:06.769 02:33:40 -- target/host_management.sh@23 -- # cat 00:14:06.769 02:33:40 -- target/host_management.sh@30 -- # rpc_cmd 00:14:06.769 02:33:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.769 02:33:40 -- common/autotest_common.sh@10 -- # set +x 00:14:06.769 Malloc0 00:14:07.030 [2024-04-27 02:33:40.398913] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.030 02:33:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.030 02:33:40 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:07.030 02:33:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:07.030 02:33:40 -- common/autotest_common.sh@10 -- # set +x 00:14:07.030 02:33:40 -- target/host_management.sh@73 -- # perfpid=61193 00:14:07.030 02:33:40 -- target/host_management.sh@74 -- # waitforlisten 61193 /var/tmp/bdevperf.sock 00:14:07.030 02:33:40 -- common/autotest_common.sh@817 -- # '[' -z 61193 ']' 00:14:07.030 02:33:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:07.030 02:33:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:07.030 02:33:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:14:07.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:07.030 02:33:40 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:07.030 02:33:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:07.030 02:33:40 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:07.030 02:33:40 -- common/autotest_common.sh@10 -- # set +x 00:14:07.030 02:33:40 -- nvmf/common.sh@521 -- # config=() 00:14:07.030 02:33:40 -- nvmf/common.sh@521 -- # local subsystem config 00:14:07.030 02:33:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:07.030 02:33:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:07.030 { 00:14:07.030 "params": { 00:14:07.030 "name": "Nvme$subsystem", 00:14:07.030 "trtype": "$TEST_TRANSPORT", 00:14:07.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:07.030 "adrfam": "ipv4", 00:14:07.030 "trsvcid": "$NVMF_PORT", 00:14:07.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:07.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:07.030 "hdgst": ${hdgst:-false}, 00:14:07.030 "ddgst": ${ddgst:-false} 00:14:07.030 }, 00:14:07.030 "method": "bdev_nvme_attach_controller" 00:14:07.030 } 00:14:07.030 EOF 00:14:07.030 )") 00:14:07.030 02:33:40 -- nvmf/common.sh@543 -- # cat 00:14:07.030 02:33:40 -- nvmf/common.sh@545 -- # jq . 00:14:07.030 02:33:40 -- nvmf/common.sh@546 -- # IFS=, 00:14:07.030 02:33:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:07.030 "params": { 00:14:07.030 "name": "Nvme0", 00:14:07.030 "trtype": "tcp", 00:14:07.030 "traddr": "10.0.0.2", 00:14:07.030 "adrfam": "ipv4", 00:14:07.030 "trsvcid": "4420", 00:14:07.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:07.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:07.030 "hdgst": false, 00:14:07.030 "ddgst": false 00:14:07.030 }, 00:14:07.030 "method": "bdev_nvme_attach_controller" 00:14:07.030 }' 00:14:07.031 [2024-04-27 02:33:40.498186] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:14:07.031 [2024-04-27 02:33:40.498235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61193 ] 00:14:07.031 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.031 [2024-04-27 02:33:40.556839] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.031 [2024-04-27 02:33:40.619665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.290 Running I/O for 10 seconds... 
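The bdevperf run started here receives its initiator configuration as JSON on /dev/fd/63; gen_nvmf_target_json expands the bdev_nvme_attach_controller fragment printed in the trace. A rough standalone equivalent, assuming the standard SPDK "subsystems"/"bdev" config wrapper around that fragment, an arbitrary scratch path (/tmp/bdevperf.json), and the 10.0.0.2:4420 listener created above (binary path relative to the spdk checkout):

# hand-written config equivalent to what the test pipes into bdevperf
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10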
00:14:07.862 02:33:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:07.862 02:33:41 -- common/autotest_common.sh@850 -- # return 0 00:14:07.862 02:33:41 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:07.862 02:33:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.862 02:33:41 -- common/autotest_common.sh@10 -- # set +x 00:14:07.862 02:33:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.862 02:33:41 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:07.862 02:33:41 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:07.862 02:33:41 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:07.862 02:33:41 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:07.862 02:33:41 -- target/host_management.sh@52 -- # local ret=1 00:14:07.862 02:33:41 -- target/host_management.sh@53 -- # local i 00:14:07.862 02:33:41 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:07.862 02:33:41 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:07.862 02:33:41 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:07.862 02:33:41 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:07.862 02:33:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.862 02:33:41 -- common/autotest_common.sh@10 -- # set +x 00:14:07.862 02:33:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.862 02:33:41 -- target/host_management.sh@55 -- # read_io_count=449 00:14:07.862 02:33:41 -- target/host_management.sh@58 -- # '[' 449 -ge 100 ']' 00:14:07.862 02:33:41 -- target/host_management.sh@59 -- # ret=0 00:14:07.862 02:33:41 -- target/host_management.sh@60 -- # break 00:14:07.862 02:33:41 -- target/host_management.sh@64 -- # return 0 00:14:07.862 02:33:41 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:07.862 02:33:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.862 02:33:41 -- common/autotest_common.sh@10 -- # set +x 00:14:07.862 [2024-04-27 02:33:41.341949] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc06f0 is same with the state(5) to be set 00:14:07.862 [2024-04-27 02:33:41.342730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.862 [2024-04-27 02:33:41.342765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.862 [2024-04-27 02:33:41.342782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.862 [2024-04-27 02:33:41.342791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.862 [2024-04-27 02:33:41.342800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.862 [2024-04-27 02:33:41.342808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.862 [2024-04-27 02:33:41.342818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.862 [2024-04-27 02:33:41.342826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.862 [2024-04-27 02:33:41.342837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.862 [2024-04-27 02:33:41.342844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.862 [2024-04-27 02:33:41.342854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.862 [2024-04-27 02:33:41.342862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.862 [2024-04-27 02:33:41.342872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.862 [2024-04-27 02:33:41.342880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.862 [2024-04-27 02:33:41.342890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.862 [2024-04-27 02:33:41.342898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.862 [2024-04-27 02:33:41.342908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.862 [2024-04-27 02:33:41.342916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.862 [2024-04-27 02:33:41.342926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.342934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.342949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.342958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.342968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.342976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.342987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.342995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.863 [2024-04-27 02:33:41.343664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.863 [2024-04-27 02:33:41.343672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.864 [2024-04-27 02:33:41.343953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.343963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9fba0 is same with the state(5) to be set 00:14:07.864 [2024-04-27 02:33:41.344003] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf9fba0 was disconnected and freed. reset controller. 00:14:07.864 [2024-04-27 02:33:41.344043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.864 [2024-04-27 02:33:41.344054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.344063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.864 [2024-04-27 02:33:41.344071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.344080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.864 [2024-04-27 02:33:41.344089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.344098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.864 [2024-04-27 02:33:41.344106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.864 [2024-04-27 02:33:41.344113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8ef90 is same with the state(5) to be set 00:14:07.864 [2024-04-27 02:33:41.345316] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:07.864 02:33:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.864 task offset: 63488 on job bdev=Nvme0n1 fails 00:14:07.864 00:14:07.864 Latency(us) 00:14:07.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.864 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:07.864 Job: Nvme0n1 ended in about 0.50 seconds with error 00:14:07.864 Verification LBA range: start 0x0 length 0x400 00:14:07.864 Nvme0n1 : 0.50 910.69 56.92 128.95 0.00 60071.98 1856.85 56797.87 00:14:07.864 =================================================================================================================== 00:14:07.864 Total : 910.69 56.92 128.95 0.00 60071.98 1856.85 56797.87 00:14:07.864 02:33:41 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:07.864 [2024-04-27 02:33:41.347295] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:07.864 [2024-04-27 02:33:41.347316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8ef90 (9): Bad file descriptor 00:14:07.864 02:33:41 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:14:07.864 02:33:41 -- common/autotest_common.sh@10 -- # set +x 00:14:07.864 02:33:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.864 02:33:41 -- target/host_management.sh@87 -- # sleep 1 00:14:07.864 [2024-04-27 02:33:41.398157] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:08.803 02:33:42 -- target/host_management.sh@91 -- # kill -9 61193 00:14:08.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (61193) - No such process 00:14:08.803 02:33:42 -- target/host_management.sh@91 -- # true 00:14:08.803 02:33:42 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:08.803 02:33:42 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:08.803 02:33:42 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:08.803 02:33:42 -- nvmf/common.sh@521 -- # config=() 00:14:08.803 02:33:42 -- nvmf/common.sh@521 -- # local subsystem config 00:14:08.803 02:33:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:08.803 02:33:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:08.803 { 00:14:08.803 "params": { 00:14:08.803 "name": "Nvme$subsystem", 00:14:08.803 "trtype": "$TEST_TRANSPORT", 00:14:08.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:08.803 "adrfam": "ipv4", 00:14:08.803 "trsvcid": "$NVMF_PORT", 00:14:08.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:08.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:08.803 "hdgst": ${hdgst:-false}, 00:14:08.803 "ddgst": ${ddgst:-false} 00:14:08.803 }, 00:14:08.803 "method": "bdev_nvme_attach_controller" 00:14:08.803 } 00:14:08.803 EOF 00:14:08.803 )") 00:14:08.803 02:33:42 -- nvmf/common.sh@543 -- # cat 00:14:08.803 02:33:42 -- nvmf/common.sh@545 -- # jq . 00:14:08.803 02:33:42 -- nvmf/common.sh@546 -- # IFS=, 00:14:08.803 02:33:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:08.803 "params": { 00:14:08.803 "name": "Nvme0", 00:14:08.803 "trtype": "tcp", 00:14:08.803 "traddr": "10.0.0.2", 00:14:08.803 "adrfam": "ipv4", 00:14:08.803 "trsvcid": "4420", 00:14:08.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:08.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:08.803 "hdgst": false, 00:14:08.803 "ddgst": false 00:14:08.803 }, 00:14:08.803 "method": "bdev_nvme_attach_controller" 00:14:08.803 }' 00:14:08.803 [2024-04-27 02:33:42.410653] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:14:08.803 [2024-04-27 02:33:42.410709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61543 ] 00:14:09.063 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.063 [2024-04-27 02:33:42.469425] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.063 [2024-04-27 02:33:42.531496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.323 Running I/O for 1 seconds... 
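The first bdevperf run above is watched from outside through bdevperf's own RPC socket: the waitforio helper seen before the controller reset polls bdev_get_iostat and pulls num_read_ops out with jq until it crosses a threshold. A minimal sketch of that loop, assuming the rpc.py script from this checkout and the /var/tmp/bdevperf.sock socket used above (the helper's own pacing between retries is not shown in this excerpt):

# wait until Nvme0n1 has completed enough reads, as waitforio does in the trace
RPC=./scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
for i in $(seq 10 -1 1); do
    reads=$("$RPC" -s "$SOCK" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "${reads:-0}" -ge 100 ]; then   # same threshold the script checks against
        break
    fi
    sleep 1                              # retry pacing, added for the sketch
done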
00:14:10.264 00:14:10.264 Latency(us) 00:14:10.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.264 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:10.264 Verification LBA range: start 0x0 length 0x400 00:14:10.264 Nvme0n1 : 1.01 1208.57 75.54 0.00 0.00 52127.15 8192.00 59856.21 00:14:10.264 =================================================================================================================== 00:14:10.264 Total : 1208.57 75.54 0.00 0.00 52127.15 8192.00 59856.21 00:14:10.526 02:33:43 -- target/host_management.sh@102 -- # stoptarget 00:14:10.526 02:33:43 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:10.526 02:33:43 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:10.526 02:33:43 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:10.526 02:33:43 -- target/host_management.sh@40 -- # nvmftestfini 00:14:10.526 02:33:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:10.526 02:33:43 -- nvmf/common.sh@117 -- # sync 00:14:10.526 02:33:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:10.526 02:33:43 -- nvmf/common.sh@120 -- # set +e 00:14:10.526 02:33:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.526 02:33:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:10.526 rmmod nvme_tcp 00:14:10.526 rmmod nvme_fabrics 00:14:10.526 rmmod nvme_keyring 00:14:10.526 02:33:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:10.526 02:33:44 -- nvmf/common.sh@124 -- # set -e 00:14:10.526 02:33:44 -- nvmf/common.sh@125 -- # return 0 00:14:10.526 02:33:44 -- nvmf/common.sh@478 -- # '[' -n 61015 ']' 00:14:10.526 02:33:44 -- nvmf/common.sh@479 -- # killprocess 61015 00:14:10.526 02:33:44 -- common/autotest_common.sh@936 -- # '[' -z 61015 ']' 00:14:10.526 02:33:44 -- common/autotest_common.sh@940 -- # kill -0 61015 00:14:10.526 02:33:44 -- common/autotest_common.sh@941 -- # uname 00:14:10.526 02:33:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:10.526 02:33:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61015 00:14:10.526 02:33:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:10.526 02:33:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:10.526 02:33:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61015' 00:14:10.526 killing process with pid 61015 00:14:10.526 02:33:44 -- common/autotest_common.sh@955 -- # kill 61015 00:14:10.526 02:33:44 -- common/autotest_common.sh@960 -- # wait 61015 00:14:10.787 [2024-04-27 02:33:44.191151] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:10.787 02:33:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:10.787 02:33:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:10.787 02:33:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:10.787 02:33:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.787 02:33:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.787 02:33:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.787 02:33:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.787 02:33:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.701 02:33:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:12.701 00:14:12.701 
real 0m6.842s 00:14:12.701 user 0m20.826s 00:14:12.701 sys 0m0.944s 00:14:12.702 02:33:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:12.702 02:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:12.702 ************************************ 00:14:12.702 END TEST nvmf_host_management 00:14:12.702 ************************************ 00:14:12.961 02:33:46 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:12.961 00:14:12.961 real 0m13.816s 00:14:12.961 user 0m22.762s 00:14:12.961 sys 0m5.897s 00:14:12.961 02:33:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:12.961 02:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:12.961 ************************************ 00:14:12.961 END TEST nvmf_host_management 00:14:12.961 ************************************ 00:14:12.961 02:33:46 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:12.961 02:33:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:12.961 02:33:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:12.961 02:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:12.961 ************************************ 00:14:12.961 START TEST nvmf_lvol 00:14:12.961 ************************************ 00:14:12.961 02:33:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:13.222 * Looking for test storage... 00:14:13.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.222 02:33:46 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.222 02:33:46 -- nvmf/common.sh@7 -- # uname -s 00:14:13.222 02:33:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.222 02:33:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.222 02:33:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.222 02:33:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.222 02:33:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.222 02:33:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.222 02:33:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.222 02:33:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.222 02:33:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.222 02:33:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.222 02:33:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:13.222 02:33:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:13.222 02:33:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.222 02:33:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.222 02:33:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.222 02:33:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.222 02:33:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.222 02:33:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.222 02:33:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.222 02:33:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.222 02:33:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.223 02:33:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.223 02:33:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.223 02:33:46 -- paths/export.sh@5 -- # export PATH 00:14:13.223 02:33:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.223 02:33:46 -- nvmf/common.sh@47 -- # : 0 00:14:13.223 02:33:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.223 02:33:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.223 02:33:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.223 02:33:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.223 02:33:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.223 02:33:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.223 02:33:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.223 02:33:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.223 02:33:46 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:13.223 02:33:46 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:13.223 02:33:46 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:13.223 02:33:46 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:13.223 02:33:46 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.223 02:33:46 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:13.223 02:33:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:13.223 02:33:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:14:13.223 02:33:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:13.223 02:33:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:13.223 02:33:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:13.223 02:33:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.223 02:33:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.223 02:33:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.223 02:33:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:13.223 02:33:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:13.223 02:33:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:13.223 02:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:19.827 02:33:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:19.827 02:33:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:19.827 02:33:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:19.827 02:33:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:19.827 02:33:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:19.827 02:33:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:19.827 02:33:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:19.827 02:33:53 -- nvmf/common.sh@295 -- # net_devs=() 00:14:19.827 02:33:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:19.827 02:33:53 -- nvmf/common.sh@296 -- # e810=() 00:14:19.827 02:33:53 -- nvmf/common.sh@296 -- # local -ga e810 00:14:19.827 02:33:53 -- nvmf/common.sh@297 -- # x722=() 00:14:19.827 02:33:53 -- nvmf/common.sh@297 -- # local -ga x722 00:14:19.827 02:33:53 -- nvmf/common.sh@298 -- # mlx=() 00:14:19.827 02:33:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:19.827 02:33:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.827 02:33:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.827 02:33:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.827 02:33:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.827 02:33:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.827 02:33:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.827 02:33:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.827 02:33:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.827 02:33:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.827 02:33:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.827 02:33:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.827 02:33:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:19.827 02:33:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:19.827 02:33:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:19.827 02:33:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.827 02:33:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:19.827 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:19.827 02:33:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.827 02:33:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:19.827 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:19.827 02:33:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:19.827 02:33:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.827 02:33:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.827 02:33:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:19.827 02:33:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.827 02:33:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:19.827 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:19.827 02:33:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.827 02:33:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.827 02:33:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.827 02:33:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:19.827 02:33:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.827 02:33:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:19.827 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:19.827 02:33:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.827 02:33:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:19.827 02:33:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:19.827 02:33:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:19.827 02:33:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:19.827 02:33:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.827 02:33:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.827 02:33:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.827 02:33:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:19.827 02:33:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.827 02:33:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.827 02:33:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:19.827 02:33:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.827 02:33:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.827 02:33:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:19.827 02:33:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:19.827 02:33:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.827 02:33:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.827 02:33:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:14:19.827 02:33:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.827 02:33:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:19.827 02:33:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.088 02:33:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.088 02:33:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.088 02:33:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:20.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:14:20.088 00:14:20.088 --- 10.0.0.2 ping statistics --- 00:14:20.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.088 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:14:20.088 02:33:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.432 ms 00:14:20.088 00:14:20.088 --- 10.0.0.1 ping statistics --- 00:14:20.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.088 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:14:20.088 02:33:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.088 02:33:53 -- nvmf/common.sh@411 -- # return 0 00:14:20.088 02:33:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:20.088 02:33:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.088 02:33:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:20.088 02:33:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:20.088 02:33:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.088 02:33:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:20.088 02:33:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:20.088 02:33:53 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:20.088 02:33:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:20.088 02:33:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:20.088 02:33:53 -- common/autotest_common.sh@10 -- # set +x 00:14:20.088 02:33:53 -- nvmf/common.sh@470 -- # nvmfpid=66203 00:14:20.088 02:33:53 -- nvmf/common.sh@471 -- # waitforlisten 66203 00:14:20.088 02:33:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:20.088 02:33:53 -- common/autotest_common.sh@817 -- # '[' -z 66203 ']' 00:14:20.088 02:33:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.088 02:33:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:20.088 02:33:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.088 02:33:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:20.089 02:33:53 -- common/autotest_common.sh@10 -- # set +x 00:14:20.089 [2024-04-27 02:33:53.622014] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:14:20.089 [2024-04-27 02:33:53.622065] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.089 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.089 [2024-04-27 02:33:53.687626] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:20.349 [2024-04-27 02:33:53.750704] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.349 [2024-04-27 02:33:53.750744] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.349 [2024-04-27 02:33:53.750751] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.349 [2024-04-27 02:33:53.750758] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.349 [2024-04-27 02:33:53.750764] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.349 [2024-04-27 02:33:53.750932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.349 [2024-04-27 02:33:53.751041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.349 [2024-04-27 02:33:53.751044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.920 02:33:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:20.920 02:33:54 -- common/autotest_common.sh@850 -- # return 0 00:14:20.920 02:33:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:20.920 02:33:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:20.920 02:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:20.920 02:33:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.920 02:33:54 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:21.181 [2024-04-27 02:33:54.563171] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.181 02:33:54 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:21.181 02:33:54 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:21.181 02:33:54 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:21.441 02:33:54 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:21.441 02:33:54 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:21.702 02:33:55 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:21.702 02:33:55 -- target/nvmf_lvol.sh@29 -- # lvs=98cb9852-7880-43ae-8c3d-147b39b9c17f 00:14:21.702 02:33:55 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 98cb9852-7880-43ae-8c3d-147b39b9c17f lvol 20 00:14:21.963 02:33:55 -- target/nvmf_lvol.sh@32 -- # lvol=6fe24abf-25e5-40f4-9cdf-2e967d5844b6 00:14:21.963 02:33:55 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:21.963 02:33:55 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6fe24abf-25e5-40f4-9cdf-2e967d5844b6 00:14:22.223 02:33:55 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:22.483 [2024-04-27 02:33:55.867107] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.483 02:33:55 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:22.483 02:33:56 -- target/nvmf_lvol.sh@42 -- # perf_pid=66610 00:14:22.483 02:33:56 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:22.483 02:33:56 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:22.483 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.868 02:33:57 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6fe24abf-25e5-40f4-9cdf-2e967d5844b6 MY_SNAPSHOT 00:14:23.868 02:33:57 -- target/nvmf_lvol.sh@47 -- # snapshot=529f8768-eeb1-4a98-8a14-b0e1b5488fb1 00:14:23.868 02:33:57 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6fe24abf-25e5-40f4-9cdf-2e967d5844b6 30 00:14:23.868 02:33:57 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 529f8768-eeb1-4a98-8a14-b0e1b5488fb1 MY_CLONE 00:14:24.130 02:33:57 -- target/nvmf_lvol.sh@49 -- # clone=6d8dc2c4-1431-4064-befc-719fe788a79d 00:14:24.130 02:33:57 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6d8dc2c4-1431-4064-befc-719fe788a79d 00:14:24.391 02:33:57 -- target/nvmf_lvol.sh@53 -- # wait 66610 00:14:34.577 Initializing NVMe Controllers 00:14:34.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:34.577 Controller IO queue size 128, less than required. 00:14:34.577 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:34.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:34.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:34.577 Initialization complete. Launching workers. 
00:14:34.577 ======================================================== 00:14:34.577 Latency(us) 00:14:34.577 Device Information : IOPS MiB/s Average min max 00:14:34.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11398.70 44.53 11234.42 1073.83 56870.43 00:14:34.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11749.90 45.90 10894.92 892.42 90150.68 00:14:34.577 ======================================================== 00:14:34.577 Total : 23148.60 90.42 11062.09 892.42 90150.68 00:14:34.577 00:14:34.577 02:34:06 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:34.577 02:34:06 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6fe24abf-25e5-40f4-9cdf-2e967d5844b6 00:14:34.577 02:34:06 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 98cb9852-7880-43ae-8c3d-147b39b9c17f 00:14:34.577 02:34:06 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:34.577 02:34:06 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:34.577 02:34:06 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:34.577 02:34:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:34.577 02:34:06 -- nvmf/common.sh@117 -- # sync 00:14:34.577 02:34:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:34.577 02:34:06 -- nvmf/common.sh@120 -- # set +e 00:14:34.577 02:34:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:34.577 02:34:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:34.577 rmmod nvme_tcp 00:14:34.577 rmmod nvme_fabrics 00:14:34.577 rmmod nvme_keyring 00:14:34.577 02:34:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:34.577 02:34:06 -- nvmf/common.sh@124 -- # set -e 00:14:34.577 02:34:07 -- nvmf/common.sh@125 -- # return 0 00:14:34.577 02:34:07 -- nvmf/common.sh@478 -- # '[' -n 66203 ']' 00:14:34.577 02:34:07 -- nvmf/common.sh@479 -- # killprocess 66203 00:14:34.577 02:34:07 -- common/autotest_common.sh@936 -- # '[' -z 66203 ']' 00:14:34.577 02:34:07 -- common/autotest_common.sh@940 -- # kill -0 66203 00:14:34.577 02:34:07 -- common/autotest_common.sh@941 -- # uname 00:14:34.577 02:34:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:34.577 02:34:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66203 00:14:34.577 02:34:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:34.577 02:34:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:34.577 02:34:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66203' 00:14:34.577 killing process with pid 66203 00:14:34.577 02:34:07 -- common/autotest_common.sh@955 -- # kill 66203 00:14:34.577 02:34:07 -- common/autotest_common.sh@960 -- # wait 66203 00:14:34.577 02:34:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:34.577 02:34:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:34.577 02:34:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:34.578 02:34:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.578 02:34:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:34.578 02:34:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.578 02:34:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.578 02:34:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.965 
02:34:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:35.965 00:14:35.965 real 0m22.771s 00:14:35.965 user 1m3.340s 00:14:35.965 sys 0m7.518s 00:14:35.965 02:34:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:35.965 02:34:09 -- common/autotest_common.sh@10 -- # set +x 00:14:35.965 ************************************ 00:14:35.965 END TEST nvmf_lvol 00:14:35.965 ************************************ 00:14:35.965 02:34:09 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:35.965 02:34:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:35.965 02:34:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:35.965 02:34:09 -- common/autotest_common.sh@10 -- # set +x 00:14:35.965 ************************************ 00:14:35.965 START TEST nvmf_lvs_grow 00:14:35.965 ************************************ 00:14:35.965 02:34:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:35.965 * Looking for test storage... 00:14:35.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.965 02:34:09 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.965 02:34:09 -- nvmf/common.sh@7 -- # uname -s 00:14:35.965 02:34:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.965 02:34:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.965 02:34:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.965 02:34:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.965 02:34:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.965 02:34:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.965 02:34:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.965 02:34:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.965 02:34:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.965 02:34:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.965 02:34:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:35.965 02:34:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:35.965 02:34:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.965 02:34:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.965 02:34:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.965 02:34:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.965 02:34:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.965 02:34:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.965 02:34:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.965 02:34:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.965 02:34:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.965 02:34:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.965 02:34:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.965 02:34:09 -- paths/export.sh@5 -- # export PATH 00:14:35.965 02:34:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.965 02:34:09 -- nvmf/common.sh@47 -- # : 0 00:14:35.965 02:34:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.965 02:34:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.965 02:34:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.965 02:34:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.965 02:34:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.965 02:34:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.965 02:34:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.965 02:34:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.965 02:34:09 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.965 02:34:09 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:35.965 02:34:09 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:35.965 02:34:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:35.965 02:34:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.965 02:34:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:35.965 02:34:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:35.965 02:34:09 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:14:35.965 02:34:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.965 02:34:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.965 02:34:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.965 02:34:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:35.965 02:34:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:35.965 02:34:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.965 02:34:09 -- common/autotest_common.sh@10 -- # set +x 00:14:42.560 02:34:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:42.560 02:34:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:42.560 02:34:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:42.560 02:34:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:42.560 02:34:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:42.560 02:34:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:42.561 02:34:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:42.561 02:34:16 -- nvmf/common.sh@295 -- # net_devs=() 00:14:42.561 02:34:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:42.561 02:34:16 -- nvmf/common.sh@296 -- # e810=() 00:14:42.561 02:34:16 -- nvmf/common.sh@296 -- # local -ga e810 00:14:42.561 02:34:16 -- nvmf/common.sh@297 -- # x722=() 00:14:42.561 02:34:16 -- nvmf/common.sh@297 -- # local -ga x722 00:14:42.561 02:34:16 -- nvmf/common.sh@298 -- # mlx=() 00:14:42.561 02:34:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:42.561 02:34:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:42.561 02:34:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:42.561 02:34:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:42.561 02:34:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:42.561 02:34:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:42.561 02:34:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:42.561 02:34:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:42.561 02:34:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:42.561 02:34:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:42.561 02:34:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:42.561 02:34:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:42.561 02:34:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:42.561 02:34:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:42.561 02:34:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:42.561 02:34:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.561 02:34:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:42.561 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:42.561 02:34:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.561 
02:34:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.561 02:34:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:42.561 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:42.561 02:34:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:42.561 02:34:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.561 02:34:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.561 02:34:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:42.561 02:34:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.561 02:34:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:42.561 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:42.561 02:34:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.561 02:34:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.561 02:34:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.561 02:34:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:42.561 02:34:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.561 02:34:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:42.561 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:42.561 02:34:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.561 02:34:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:42.561 02:34:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:42.561 02:34:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:42.561 02:34:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:42.561 02:34:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.561 02:34:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.561 02:34:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:42.561 02:34:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:42.561 02:34:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:42.561 02:34:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:42.561 02:34:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:42.561 02:34:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:42.561 02:34:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.561 02:34:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:42.561 02:34:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:42.561 02:34:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:42.561 02:34:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:42.823 02:34:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:42.823 02:34:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:42.823 02:34:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:42.823 
02:34:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:42.823 02:34:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:42.823 02:34:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:42.823 02:34:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:42.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:14:42.823 00:14:42.823 --- 10.0.0.2 ping statistics --- 00:14:42.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.823 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:14:42.823 02:34:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:14:43.084 00:14:43.084 --- 10.0.0.1 ping statistics --- 00:14:43.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.084 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:14:43.084 02:34:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.084 02:34:16 -- nvmf/common.sh@411 -- # return 0 00:14:43.084 02:34:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:43.084 02:34:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.084 02:34:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:43.084 02:34:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:43.084 02:34:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.084 02:34:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:43.084 02:34:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:43.084 02:34:16 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:43.084 02:34:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:43.084 02:34:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:43.084 02:34:16 -- common/autotest_common.sh@10 -- # set +x 00:14:43.084 02:34:16 -- nvmf/common.sh@470 -- # nvmfpid=72962 00:14:43.084 02:34:16 -- nvmf/common.sh@471 -- # waitforlisten 72962 00:14:43.084 02:34:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:43.084 02:34:16 -- common/autotest_common.sh@817 -- # '[' -z 72962 ']' 00:14:43.084 02:34:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.084 02:34:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:43.084 02:34:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.084 02:34:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:43.084 02:34:16 -- common/autotest_common.sh@10 -- # set +x 00:14:43.084 [2024-04-27 02:34:16.541649] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
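The nvmf_lvs_grow suite re-runs the same initialization: it detects the same two E810 ports, rebuilds the cvl_0_0_ns_spdk namespace topology, and starts a fresh target inside it, this time pinned to a single core (-m 0x1, versus -m 0x7 for the lvol test). waitforlisten then blocks until the new target answers on its UNIX-domain RPC socket at /var/tmp/spdk.sock. A rough, hedged equivalent of that wait, assuming rpc.py and the default socket path shown in the log (rpc_get_methods is just a cheap request that succeeds once the app is serving RPCs):

# hedged sketch of a waitforlisten-style poll loop
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done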
00:14:43.084 [2024-04-27 02:34:16.541738] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.084 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.084 [2024-04-27 02:34:16.615776] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.084 [2024-04-27 02:34:16.687388] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.084 [2024-04-27 02:34:16.687425] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.084 [2024-04-27 02:34:16.687433] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.084 [2024-04-27 02:34:16.687439] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.084 [2024-04-27 02:34:16.687445] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.084 [2024-04-27 02:34:16.687465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.028 02:34:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:44.028 02:34:17 -- common/autotest_common.sh@850 -- # return 0 00:14:44.028 02:34:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:44.028 02:34:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:44.028 02:34:17 -- common/autotest_common.sh@10 -- # set +x 00:14:44.028 02:34:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.028 02:34:17 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:44.028 [2024-04-27 02:34:17.482912] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.028 02:34:17 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:44.028 02:34:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:44.028 02:34:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:44.028 02:34:17 -- common/autotest_common.sh@10 -- # set +x 00:14:44.028 ************************************ 00:14:44.028 START TEST lvs_grow_clean 00:14:44.028 ************************************ 00:14:44.028 02:34:17 -- common/autotest_common.sh@1111 -- # lvs_grow 00:14:44.028 02:34:17 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:44.028 02:34:17 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:44.028 02:34:17 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:44.028 02:34:17 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:44.028 02:34:17 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:44.028 02:34:17 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:44.028 02:34:17 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:44.028 02:34:17 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:44.028 02:34:17 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:44.289 02:34:17 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:44.289 02:34:17 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:44.550 02:34:17 -- target/nvmf_lvs_grow.sh@28 -- # lvs=ba6716c9-f0f3-445f-9f5b-f72970781fed 00:14:44.550 02:34:17 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba6716c9-f0f3-445f-9f5b-f72970781fed 00:14:44.550 02:34:17 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:44.550 02:34:18 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:44.550 02:34:18 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:44.550 02:34:18 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ba6716c9-f0f3-445f-9f5b-f72970781fed lvol 150 00:14:44.811 02:34:18 -- target/nvmf_lvs_grow.sh@33 -- # lvol=649d16a7-f6d6-4bb9-ae35-2f8125b61cfe 00:14:44.811 02:34:18 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:44.811 02:34:18 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:45.071 [2024-04-27 02:34:18.448766] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:45.071 [2024-04-27 02:34:18.448818] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:45.071 true 00:14:45.071 02:34:18 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba6716c9-f0f3-445f-9f5b-f72970781fed 00:14:45.071 02:34:18 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:45.071 02:34:18 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:45.071 02:34:18 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:45.332 02:34:18 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 649d16a7-f6d6-4bb9-ae35-2f8125b61cfe 00:14:45.332 02:34:18 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:45.593 [2024-04-27 02:34:19.038561] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.593 02:34:19 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.593 02:34:19 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73464 00:14:45.593 02:34:19 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:45.593 02:34:19 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:45.593 02:34:19 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73464 
/var/tmp/bdevperf.sock 00:14:45.593 02:34:19 -- common/autotest_common.sh@817 -- # '[' -z 73464 ']' 00:14:45.593 02:34:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:45.593 02:34:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:45.593 02:34:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:45.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:45.593 02:34:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:45.593 02:34:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.854 [2024-04-27 02:34:19.234155] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:14:45.854 [2024-04-27 02:34:19.234203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73464 ] 00:14:45.854 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.854 [2024-04-27 02:34:19.292247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.854 [2024-04-27 02:34:19.354491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.425 02:34:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:46.425 02:34:19 -- common/autotest_common.sh@850 -- # return 0 00:14:46.425 02:34:19 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:46.685 Nvme0n1 00:14:46.685 02:34:20 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:46.945 [ 00:14:46.945 { 00:14:46.945 "name": "Nvme0n1", 00:14:46.945 "aliases": [ 00:14:46.945 "649d16a7-f6d6-4bb9-ae35-2f8125b61cfe" 00:14:46.945 ], 00:14:46.945 "product_name": "NVMe disk", 00:14:46.946 "block_size": 4096, 00:14:46.946 "num_blocks": 38912, 00:14:46.946 "uuid": "649d16a7-f6d6-4bb9-ae35-2f8125b61cfe", 00:14:46.946 "assigned_rate_limits": { 00:14:46.946 "rw_ios_per_sec": 0, 00:14:46.946 "rw_mbytes_per_sec": 0, 00:14:46.946 "r_mbytes_per_sec": 0, 00:14:46.946 "w_mbytes_per_sec": 0 00:14:46.946 }, 00:14:46.946 "claimed": false, 00:14:46.946 "zoned": false, 00:14:46.946 "supported_io_types": { 00:14:46.946 "read": true, 00:14:46.946 "write": true, 00:14:46.946 "unmap": true, 00:14:46.946 "write_zeroes": true, 00:14:46.946 "flush": true, 00:14:46.946 "reset": true, 00:14:46.946 "compare": true, 00:14:46.946 "compare_and_write": true, 00:14:46.946 "abort": true, 00:14:46.946 "nvme_admin": true, 00:14:46.946 "nvme_io": true 00:14:46.946 }, 00:14:46.946 "memory_domains": [ 00:14:46.946 { 00:14:46.946 "dma_device_id": "system", 00:14:46.946 "dma_device_type": 1 00:14:46.946 } 00:14:46.946 ], 00:14:46.946 "driver_specific": { 00:14:46.946 "nvme": [ 00:14:46.946 { 00:14:46.946 "trid": { 00:14:46.946 "trtype": "TCP", 00:14:46.946 "adrfam": "IPv4", 00:14:46.946 "traddr": "10.0.0.2", 00:14:46.946 "trsvcid": "4420", 00:14:46.946 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:46.946 }, 00:14:46.946 "ctrlr_data": { 00:14:46.946 "cntlid": 1, 00:14:46.946 "vendor_id": "0x8086", 00:14:46.946 "model_number": "SPDK bdev Controller", 00:14:46.946 "serial_number": "SPDK0", 00:14:46.946 
"firmware_revision": "24.05", 00:14:46.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:46.946 "oacs": { 00:14:46.946 "security": 0, 00:14:46.946 "format": 0, 00:14:46.946 "firmware": 0, 00:14:46.946 "ns_manage": 0 00:14:46.946 }, 00:14:46.946 "multi_ctrlr": true, 00:14:46.946 "ana_reporting": false 00:14:46.946 }, 00:14:46.946 "vs": { 00:14:46.946 "nvme_version": "1.3" 00:14:46.946 }, 00:14:46.946 "ns_data": { 00:14:46.946 "id": 1, 00:14:46.946 "can_share": true 00:14:46.946 } 00:14:46.946 } 00:14:46.946 ], 00:14:46.946 "mp_policy": "active_passive" 00:14:46.946 } 00:14:46.946 } 00:14:46.946 ] 00:14:46.946 02:34:20 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73693 00:14:46.946 02:34:20 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:46.946 02:34:20 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:46.946 Running I/O for 10 seconds... 00:14:47.901 Latency(us) 00:14:47.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.901 Nvme0n1 : 1.00 17095.00 66.78 0.00 0.00 0.00 0.00 0.00 00:14:47.901 =================================================================================================================== 00:14:47.901 Total : 17095.00 66.78 0.00 0.00 0.00 0.00 0.00 00:14:47.901 00:14:48.844 02:34:22 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ba6716c9-f0f3-445f-9f5b-f72970781fed 00:14:49.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.105 Nvme0n1 : 2.00 17219.50 67.26 0.00 0.00 0.00 0.00 0.00 00:14:49.105 =================================================================================================================== 00:14:49.105 Total : 17219.50 67.26 0.00 0.00 0.00 0.00 0.00 00:14:49.105 00:14:49.105 true 00:14:49.105 02:34:22 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba6716c9-f0f3-445f-9f5b-f72970781fed 00:14:49.105 02:34:22 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:49.105 02:34:22 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:49.105 02:34:22 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:49.105 02:34:22 -- target/nvmf_lvs_grow.sh@65 -- # wait 73693 00:14:50.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.047 Nvme0n1 : 3.00 17269.00 67.46 0.00 0.00 0.00 0.00 0.00 00:14:50.047 =================================================================================================================== 00:14:50.047 Total : 17269.00 67.46 0.00 0.00 0.00 0.00 0.00 00:14:50.047 00:14:50.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.990 Nvme0n1 : 4.00 17303.75 67.59 0.00 0.00 0.00 0.00 0.00 00:14:50.990 =================================================================================================================== 00:14:50.990 Total : 17303.75 67.59 0.00 0.00 0.00 0.00 0.00 00:14:50.990 00:14:51.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.934 Nvme0n1 : 5.00 17334.20 67.71 0.00 0.00 0.00 0.00 0.00 00:14:51.934 =================================================================================================================== 00:14:51.934 Total : 17334.20 67.71 0.00 
0.00 0.00 0.00 0.00 00:14:51.934 00:14:53.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.321 Nvme0n1 : 6.00 17354.50 67.79 0.00 0.00 0.00 0.00 0.00 00:14:53.321 =================================================================================================================== 00:14:53.321 Total : 17354.50 67.79 0.00 0.00 0.00 0.00 0.00 00:14:53.321 00:14:53.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.892 Nvme0n1 : 7.00 17374.71 67.87 0.00 0.00 0.00 0.00 0.00 00:14:53.892 =================================================================================================================== 00:14:53.892 Total : 17374.71 67.87 0.00 0.00 0.00 0.00 0.00 00:14:53.892 00:14:55.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.279 Nvme0n1 : 8.00 17388.88 67.93 0.00 0.00 0.00 0.00 0.00 00:14:55.279 =================================================================================================================== 00:14:55.279 Total : 17388.88 67.93 0.00 0.00 0.00 0.00 0.00 00:14:55.279 00:14:56.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.223 Nvme0n1 : 9.00 17402.56 67.98 0.00 0.00 0.00 0.00 0.00 00:14:56.223 =================================================================================================================== 00:14:56.223 Total : 17402.56 67.98 0.00 0.00 0.00 0.00 0.00 00:14:56.223 00:14:57.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.167 Nvme0n1 : 10.00 17410.30 68.01 0.00 0.00 0.00 0.00 0.00 00:14:57.167 =================================================================================================================== 00:14:57.167 Total : 17410.30 68.01 0.00 0.00 0.00 0.00 0.00 00:14:57.167 00:14:57.167 00:14:57.167 Latency(us) 00:14:57.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.167 Nvme0n1 : 10.01 17409.99 68.01 0.00 0.00 7346.27 4805.97 17148.59 00:14:57.167 =================================================================================================================== 00:14:57.167 Total : 17409.99 68.01 0.00 0.00 7346.27 4805.97 17148.59 00:14:57.167 0 00:14:57.167 02:34:30 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73464 00:14:57.167 02:34:30 -- common/autotest_common.sh@936 -- # '[' -z 73464 ']' 00:14:57.167 02:34:30 -- common/autotest_common.sh@940 -- # kill -0 73464 00:14:57.168 02:34:30 -- common/autotest_common.sh@941 -- # uname 00:14:57.168 02:34:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:57.168 02:34:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73464 00:14:57.168 02:34:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:57.168 02:34:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:57.168 02:34:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73464' 00:14:57.168 killing process with pid 73464 00:14:57.168 02:34:30 -- common/autotest_common.sh@955 -- # kill 73464 00:14:57.168 Received shutdown signal, test time was about 10.000000 seconds 00:14:57.168 00:14:57.168 Latency(us) 00:14:57.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.168 =================================================================================================================== 00:14:57.168 Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:14:57.168 02:34:30 -- common/autotest_common.sh@960 -- # wait 73464 00:14:57.168 02:34:30 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:57.429 02:34:30 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba6716c9-f0f3-445f-9f5b-f72970781fed 00:14:57.429 02:34:30 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:57.690 02:34:31 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:57.690 02:34:31 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:57.690 02:34:31 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:57.690 [2024-04-27 02:34:31.202393] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:57.690 02:34:31 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba6716c9-f0f3-445f-9f5b-f72970781fed 00:14:57.690 02:34:31 -- common/autotest_common.sh@638 -- # local es=0 00:14:57.690 02:34:31 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba6716c9-f0f3-445f-9f5b-f72970781fed 00:14:57.690 02:34:31 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.690 02:34:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:57.690 02:34:31 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.690 02:34:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:57.690 02:34:31 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.690 02:34:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:57.690 02:34:31 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.690 02:34:31 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:57.690 02:34:31 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba6716c9-f0f3-445f-9f5b-f72970781fed 00:14:57.952 request: 00:14:57.952 { 00:14:57.952 "uuid": "ba6716c9-f0f3-445f-9f5b-f72970781fed", 00:14:57.952 "method": "bdev_lvol_get_lvstores", 00:14:57.952 "req_id": 1 00:14:57.952 } 00:14:57.952 Got JSON-RPC error response 00:14:57.952 response: 00:14:57.952 { 00:14:57.952 "code": -19, 00:14:57.952 "message": "No such device" 00:14:57.952 } 00:14:57.952 02:34:31 -- common/autotest_common.sh@641 -- # es=1 00:14:57.952 02:34:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:57.952 02:34:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:57.952 02:34:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:57.952 02:34:31 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:57.952 aio_bdev 00:14:58.212 02:34:31 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 649d16a7-f6d6-4bb9-ae35-2f8125b61cfe 
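The cluster counts that the clean-grow test asserts follow directly from the sizes in the log: the backing aio_bdev starts at 200 MiB and is truncated to 400 MiB, the lvstore uses 4 MiB clusters (--cluster-sz 4194304), and the 150 MiB lvol rounds up to 38 clusters (the Nvme0n1 bdev reports 38912 blocks of 4096 bytes, i.e. 152 MiB). The 49/99/61 figures are consistent with one cluster being reserved for lvstore metadata, although that reservation is an inference from the numbers rather than something the log states. A back-of-the-envelope check:

# hedged arithmetic check of the cluster counts asserted above
CLUSTER=$((4 * 1024 * 1024))                              # --cluster-sz 4194304
echo $((200 * 1024 * 1024 / CLUSTER - 1))                 # 49 data clusters before the grow
echo $((400 * 1024 * 1024 / CLUSTER - 1))                 # 99 data clusters after truncate + rescan + grow
echo $(((150 * 1024 * 1024 + CLUSTER - 1) / CLUSTER))     # 38 clusters for the 150 MiB lvol
echo $((99 - 38))                                         # 61, matching free_clusters=61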
00:14:58.212 02:34:31 -- common/autotest_common.sh@885 -- # local bdev_name=649d16a7-f6d6-4bb9-ae35-2f8125b61cfe 00:14:58.212 02:34:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:58.212 02:34:31 -- common/autotest_common.sh@887 -- # local i 00:14:58.212 02:34:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:58.212 02:34:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:58.212 02:34:31 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:58.212 02:34:31 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 649d16a7-f6d6-4bb9-ae35-2f8125b61cfe -t 2000 00:14:58.473 [ 00:14:58.473 { 00:14:58.473 "name": "649d16a7-f6d6-4bb9-ae35-2f8125b61cfe", 00:14:58.473 "aliases": [ 00:14:58.473 "lvs/lvol" 00:14:58.473 ], 00:14:58.473 "product_name": "Logical Volume", 00:14:58.473 "block_size": 4096, 00:14:58.473 "num_blocks": 38912, 00:14:58.473 "uuid": "649d16a7-f6d6-4bb9-ae35-2f8125b61cfe", 00:14:58.473 "assigned_rate_limits": { 00:14:58.473 "rw_ios_per_sec": 0, 00:14:58.473 "rw_mbytes_per_sec": 0, 00:14:58.473 "r_mbytes_per_sec": 0, 00:14:58.473 "w_mbytes_per_sec": 0 00:14:58.473 }, 00:14:58.473 "claimed": false, 00:14:58.473 "zoned": false, 00:14:58.473 "supported_io_types": { 00:14:58.473 "read": true, 00:14:58.473 "write": true, 00:14:58.473 "unmap": true, 00:14:58.473 "write_zeroes": true, 00:14:58.473 "flush": false, 00:14:58.473 "reset": true, 00:14:58.473 "compare": false, 00:14:58.473 "compare_and_write": false, 00:14:58.473 "abort": false, 00:14:58.473 "nvme_admin": false, 00:14:58.473 "nvme_io": false 00:14:58.473 }, 00:14:58.473 "driver_specific": { 00:14:58.473 "lvol": { 00:14:58.473 "lvol_store_uuid": "ba6716c9-f0f3-445f-9f5b-f72970781fed", 00:14:58.473 "base_bdev": "aio_bdev", 00:14:58.473 "thin_provision": false, 00:14:58.473 "snapshot": false, 00:14:58.473 "clone": false, 00:14:58.473 "esnap_clone": false 00:14:58.473 } 00:14:58.473 } 00:14:58.473 } 00:14:58.473 ] 00:14:58.473 02:34:31 -- common/autotest_common.sh@893 -- # return 0 00:14:58.473 02:34:31 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba6716c9-f0f3-445f-9f5b-f72970781fed 00:14:58.473 02:34:31 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:58.473 02:34:32 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:58.473 02:34:32 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba6716c9-f0f3-445f-9f5b-f72970781fed 00:14:58.473 02:34:32 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:58.733 02:34:32 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:58.733 02:34:32 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 649d16a7-f6d6-4bb9-ae35-2f8125b61cfe 00:14:58.733 02:34:32 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba6716c9-f0f3-445f-9f5b-f72970781fed 00:14:58.994 02:34:32 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:59.256 02:34:32 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:59.256 00:14:59.256 real 
0m15.050s 00:14:59.256 user 0m14.715s 00:14:59.256 sys 0m1.286s 00:14:59.256 02:34:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:59.256 02:34:32 -- common/autotest_common.sh@10 -- # set +x 00:14:59.256 ************************************ 00:14:59.256 END TEST lvs_grow_clean 00:14:59.256 ************************************ 00:14:59.256 02:34:32 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:59.256 02:34:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:59.256 02:34:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:59.256 02:34:32 -- common/autotest_common.sh@10 -- # set +x 00:14:59.519 ************************************ 00:14:59.519 START TEST lvs_grow_dirty 00:14:59.519 ************************************ 00:14:59.519 02:34:32 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:14:59.519 02:34:32 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:59.519 02:34:32 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:59.519 02:34:32 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:59.519 02:34:32 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:59.519 02:34:32 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:59.519 02:34:32 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:59.519 02:34:32 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:59.519 02:34:32 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:59.519 02:34:32 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:59.519 02:34:33 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:59.519 02:34:33 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:59.780 02:34:33 -- target/nvmf_lvs_grow.sh@28 -- # lvs=d4d23176-fa91-4020-986e-d7a6e7c86271 00:14:59.780 02:34:33 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4d23176-fa91-4020-986e-d7a6e7c86271 00:14:59.780 02:34:33 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:59.780 02:34:33 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:59.780 02:34:33 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:59.780 02:34:33 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d4d23176-fa91-4020-986e-d7a6e7c86271 lvol 150 00:15:00.040 02:34:33 -- target/nvmf_lvs_grow.sh@33 -- # lvol=d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a 00:15:00.040 02:34:33 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:00.040 02:34:33 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:00.300 [2024-04-27 02:34:33.689300] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 
00:15:00.300 [2024-04-27 02:34:33.689352] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:00.300 true 00:15:00.300 02:34:33 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:00.300 02:34:33 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4d23176-fa91-4020-986e-d7a6e7c86271 00:15:00.300 02:34:33 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:00.300 02:34:33 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:00.560 02:34:34 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a 00:15:00.820 02:34:34 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:00.820 02:34:34 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:01.081 02:34:34 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=76476 00:15:01.081 02:34:34 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:01.081 02:34:34 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:01.081 02:34:34 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 76476 /var/tmp/bdevperf.sock 00:15:01.081 02:34:34 -- common/autotest_common.sh@817 -- # '[' -z 76476 ']' 00:15:01.081 02:34:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:01.081 02:34:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:01.081 02:34:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:01.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:01.081 02:34:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:01.081 02:34:34 -- common/autotest_common.sh@10 -- # set +x 00:15:01.081 [2024-04-27 02:34:34.516569] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:15:01.081 [2024-04-27 02:34:34.516619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76476 ] 00:15:01.081 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.081 [2024-04-27 02:34:34.574480] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.081 [2024-04-27 02:34:34.637019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.023 02:34:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:02.023 02:34:35 -- common/autotest_common.sh@850 -- # return 0 00:15:02.023 02:34:35 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:02.023 Nvme0n1 00:15:02.023 02:34:35 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:02.283 [ 00:15:02.283 { 00:15:02.283 "name": "Nvme0n1", 00:15:02.283 "aliases": [ 00:15:02.283 "d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a" 00:15:02.283 ], 00:15:02.283 "product_name": "NVMe disk", 00:15:02.283 "block_size": 4096, 00:15:02.283 "num_blocks": 38912, 00:15:02.283 "uuid": "d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a", 00:15:02.284 "assigned_rate_limits": { 00:15:02.284 "rw_ios_per_sec": 0, 00:15:02.284 "rw_mbytes_per_sec": 0, 00:15:02.284 "r_mbytes_per_sec": 0, 00:15:02.284 "w_mbytes_per_sec": 0 00:15:02.284 }, 00:15:02.284 "claimed": false, 00:15:02.284 "zoned": false, 00:15:02.284 "supported_io_types": { 00:15:02.284 "read": true, 00:15:02.284 "write": true, 00:15:02.284 "unmap": true, 00:15:02.284 "write_zeroes": true, 00:15:02.284 "flush": true, 00:15:02.284 "reset": true, 00:15:02.284 "compare": true, 00:15:02.284 "compare_and_write": true, 00:15:02.284 "abort": true, 00:15:02.284 "nvme_admin": true, 00:15:02.284 "nvme_io": true 00:15:02.284 }, 00:15:02.284 "memory_domains": [ 00:15:02.284 { 00:15:02.284 "dma_device_id": "system", 00:15:02.284 "dma_device_type": 1 00:15:02.284 } 00:15:02.284 ], 00:15:02.284 "driver_specific": { 00:15:02.284 "nvme": [ 00:15:02.284 { 00:15:02.284 "trid": { 00:15:02.284 "trtype": "TCP", 00:15:02.284 "adrfam": "IPv4", 00:15:02.284 "traddr": "10.0.0.2", 00:15:02.284 "trsvcid": "4420", 00:15:02.284 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:02.284 }, 00:15:02.284 "ctrlr_data": { 00:15:02.284 "cntlid": 1, 00:15:02.284 "vendor_id": "0x8086", 00:15:02.284 "model_number": "SPDK bdev Controller", 00:15:02.284 "serial_number": "SPDK0", 00:15:02.284 "firmware_revision": "24.05", 00:15:02.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:02.284 "oacs": { 00:15:02.284 "security": 0, 00:15:02.284 "format": 0, 00:15:02.284 "firmware": 0, 00:15:02.284 "ns_manage": 0 00:15:02.284 }, 00:15:02.284 "multi_ctrlr": true, 00:15:02.284 "ana_reporting": false 00:15:02.284 }, 00:15:02.284 "vs": { 00:15:02.284 "nvme_version": "1.3" 00:15:02.284 }, 00:15:02.284 "ns_data": { 00:15:02.284 "id": 1, 00:15:02.284 "can_share": true 00:15:02.284 } 00:15:02.284 } 00:15:02.284 ], 00:15:02.284 "mp_policy": "active_passive" 00:15:02.284 } 00:15:02.284 } 00:15:02.284 ] 00:15:02.284 02:34:35 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=76794 00:15:02.284 02:34:35 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:02.284 02:34:35 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:02.284 Running I/O for 10 seconds... 00:15:03.675 Latency(us) 00:15:03.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.675 Nvme0n1 : 1.00 17391.00 67.93 0.00 0.00 0.00 0.00 0.00 00:15:03.675 =================================================================================================================== 00:15:03.675 Total : 17391.00 67.93 0.00 0.00 0.00 0.00 0.00 00:15:03.675 00:15:04.317 02:34:37 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d4d23176-fa91-4020-986e-d7a6e7c86271 00:15:04.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.317 Nvme0n1 : 2.00 17623.00 68.84 0.00 0.00 0.00 0.00 0.00 00:15:04.317 =================================================================================================================== 00:15:04.317 Total : 17623.00 68.84 0.00 0.00 0.00 0.00 0.00 00:15:04.317 00:15:04.317 true 00:15:04.579 02:34:37 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4d23176-fa91-4020-986e-d7a6e7c86271 00:15:04.579 02:34:37 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:04.579 02:34:38 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:04.579 02:34:38 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:04.579 02:34:38 -- target/nvmf_lvs_grow.sh@65 -- # wait 76794 00:15:05.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.523 Nvme0n1 : 3.00 17700.67 69.14 0.00 0.00 0.00 0.00 0.00 00:15:05.523 =================================================================================================================== 00:15:05.523 Total : 17700.67 69.14 0.00 0.00 0.00 0.00 0.00 00:15:05.523 00:15:06.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.468 Nvme0n1 : 4.00 17755.50 69.36 0.00 0.00 0.00 0.00 0.00 00:15:06.468 =================================================================================================================== 00:15:06.468 Total : 17755.50 69.36 0.00 0.00 0.00 0.00 0.00 00:15:06.468 00:15:07.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.413 Nvme0n1 : 5.00 17775.60 69.44 0.00 0.00 0.00 0.00 0.00 00:15:07.413 =================================================================================================================== 00:15:07.413 Total : 17775.60 69.44 0.00 0.00 0.00 0.00 0.00 00:15:07.413 00:15:08.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.356 Nvme0n1 : 6.00 17799.67 69.53 0.00 0.00 0.00 0.00 0.00 00:15:08.356 =================================================================================================================== 00:15:08.356 Total : 17799.67 69.53 0.00 0.00 0.00 0.00 0.00 00:15:08.356 00:15:09.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.300 Nvme0n1 : 7.00 17826.00 69.63 0.00 0.00 0.00 0.00 0.00 00:15:09.300 =================================================================================================================== 00:15:09.300 Total : 17826.00 69.63 0.00 0.00 0.00 0.00 0.00 00:15:09.300 00:15:10.683 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:15:10.683 Nvme0n1 : 8.00 17837.75 69.68 0.00 0.00 0.00 0.00 0.00 00:15:10.683 =================================================================================================================== 00:15:10.683 Total : 17837.75 69.68 0.00 0.00 0.00 0.00 0.00 00:15:10.683 00:15:11.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.626 Nvme0n1 : 9.00 17854.00 69.74 0.00 0.00 0.00 0.00 0.00 00:15:11.626 =================================================================================================================== 00:15:11.626 Total : 17854.00 69.74 0.00 0.00 0.00 0.00 0.00 00:15:11.626 00:15:12.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.568 Nvme0n1 : 10.00 17862.30 69.77 0.00 0.00 0.00 0.00 0.00 00:15:12.568 =================================================================================================================== 00:15:12.568 Total : 17862.30 69.77 0.00 0.00 0.00 0.00 0.00 00:15:12.568 00:15:12.568 00:15:12.568 Latency(us) 00:15:12.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.568 Nvme0n1 : 10.01 17866.18 69.79 0.00 0.00 7159.42 4860.59 22282.24 00:15:12.568 =================================================================================================================== 00:15:12.568 Total : 17866.18 69.79 0.00 0.00 7159.42 4860.59 22282.24 00:15:12.568 0 00:15:12.568 02:34:45 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 76476 00:15:12.568 02:34:45 -- common/autotest_common.sh@936 -- # '[' -z 76476 ']' 00:15:12.568 02:34:45 -- common/autotest_common.sh@940 -- # kill -0 76476 00:15:12.568 02:34:45 -- common/autotest_common.sh@941 -- # uname 00:15:12.568 02:34:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.568 02:34:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76476 00:15:12.568 02:34:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:12.568 02:34:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:12.568 02:34:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76476' 00:15:12.568 killing process with pid 76476 00:15:12.568 02:34:45 -- common/autotest_common.sh@955 -- # kill 76476 00:15:12.568 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.568 00:15:12.568 Latency(us) 00:15:12.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.568 =================================================================================================================== 00:15:12.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:12.568 02:34:45 -- common/autotest_common.sh@960 -- # wait 76476 00:15:12.568 02:34:46 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:12.830 02:34:46 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4d23176-fa91-4020-986e-d7a6e7c86271 00:15:12.830 02:34:46 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:12.830 02:34:46 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:12.830 02:34:46 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:12.830 02:34:46 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72962 00:15:12.830 02:34:46 -- 
target/nvmf_lvs_grow.sh@74 -- # wait 72962 00:15:13.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72962 Killed "${NVMF_APP[@]}" "$@" 00:15:13.092 02:34:46 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:13.092 02:34:46 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:13.092 02:34:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:13.092 02:34:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:13.092 02:34:46 -- common/autotest_common.sh@10 -- # set +x 00:15:13.092 02:34:46 -- nvmf/common.sh@470 -- # nvmfpid=78826 00:15:13.092 02:34:46 -- nvmf/common.sh@471 -- # waitforlisten 78826 00:15:13.092 02:34:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:13.092 02:34:46 -- common/autotest_common.sh@817 -- # '[' -z 78826 ']' 00:15:13.092 02:34:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.092 02:34:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:13.092 02:34:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.092 02:34:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:13.092 02:34:46 -- common/autotest_common.sh@10 -- # set +x 00:15:13.092 [2024-04-27 02:34:46.540686] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:15:13.092 [2024-04-27 02:34:46.540739] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.092 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.092 [2024-04-27 02:34:46.606354] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.092 [2024-04-27 02:34:46.669012] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.092 [2024-04-27 02:34:46.669047] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.092 [2024-04-27 02:34:46.669055] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.092 [2024-04-27 02:34:46.669061] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.092 [2024-04-27 02:34:46.669067] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:13.092 [2024-04-27 02:34:46.669084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.035 02:34:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:14.035 02:34:47 -- common/autotest_common.sh@850 -- # return 0 00:15:14.035 02:34:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:14.035 02:34:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:14.035 02:34:47 -- common/autotest_common.sh@10 -- # set +x 00:15:14.035 02:34:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.035 02:34:47 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:14.035 [2024-04-27 02:34:47.470379] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:14.035 [2024-04-27 02:34:47.470469] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:14.035 [2024-04-27 02:34:47.470499] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:14.035 02:34:47 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:14.035 02:34:47 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a 00:15:14.035 02:34:47 -- common/autotest_common.sh@885 -- # local bdev_name=d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a 00:15:14.035 02:34:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:14.035 02:34:47 -- common/autotest_common.sh@887 -- # local i 00:15:14.035 02:34:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:14.035 02:34:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:14.035 02:34:47 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:14.036 02:34:47 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a -t 2000 00:15:14.297 [ 00:15:14.297 { 00:15:14.297 "name": "d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a", 00:15:14.297 "aliases": [ 00:15:14.297 "lvs/lvol" 00:15:14.297 ], 00:15:14.297 "product_name": "Logical Volume", 00:15:14.297 "block_size": 4096, 00:15:14.297 "num_blocks": 38912, 00:15:14.297 "uuid": "d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a", 00:15:14.297 "assigned_rate_limits": { 00:15:14.297 "rw_ios_per_sec": 0, 00:15:14.297 "rw_mbytes_per_sec": 0, 00:15:14.297 "r_mbytes_per_sec": 0, 00:15:14.297 "w_mbytes_per_sec": 0 00:15:14.297 }, 00:15:14.297 "claimed": false, 00:15:14.297 "zoned": false, 00:15:14.297 "supported_io_types": { 00:15:14.297 "read": true, 00:15:14.297 "write": true, 00:15:14.297 "unmap": true, 00:15:14.297 "write_zeroes": true, 00:15:14.297 "flush": false, 00:15:14.297 "reset": true, 00:15:14.297 "compare": false, 00:15:14.297 "compare_and_write": false, 00:15:14.297 "abort": false, 00:15:14.297 "nvme_admin": false, 00:15:14.297 "nvme_io": false 00:15:14.297 }, 00:15:14.297 "driver_specific": { 00:15:14.297 "lvol": { 00:15:14.297 "lvol_store_uuid": "d4d23176-fa91-4020-986e-d7a6e7c86271", 00:15:14.297 "base_bdev": "aio_bdev", 00:15:14.297 "thin_provision": false, 00:15:14.297 "snapshot": false, 00:15:14.297 "clone": false, 00:15:14.297 "esnap_clone": false 00:15:14.297 } 00:15:14.297 } 00:15:14.297 } 00:15:14.297 ] 00:15:14.297 02:34:47 -- common/autotest_common.sh@893 -- # return 0 00:15:14.297 02:34:47 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4d23176-fa91-4020-986e-d7a6e7c86271 00:15:14.297 02:34:47 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:14.557 02:34:47 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:14.557 02:34:47 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4d23176-fa91-4020-986e-d7a6e7c86271 00:15:14.557 02:34:47 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:14.557 02:34:48 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:14.557 02:34:48 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:14.818 [2024-04-27 02:34:48.254341] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:14.818 02:34:48 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4d23176-fa91-4020-986e-d7a6e7c86271 00:15:14.818 02:34:48 -- common/autotest_common.sh@638 -- # local es=0 00:15:14.818 02:34:48 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4d23176-fa91-4020-986e-d7a6e7c86271 00:15:14.818 02:34:48 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:14.818 02:34:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:14.818 02:34:48 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:14.818 02:34:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:14.818 02:34:48 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:14.818 02:34:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:14.818 02:34:48 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:14.818 02:34:48 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:14.818 02:34:48 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4d23176-fa91-4020-986e-d7a6e7c86271 00:15:15.080 request: 00:15:15.080 { 00:15:15.080 "uuid": "d4d23176-fa91-4020-986e-d7a6e7c86271", 00:15:15.080 "method": "bdev_lvol_get_lvstores", 00:15:15.080 "req_id": 1 00:15:15.080 } 00:15:15.080 Got JSON-RPC error response 00:15:15.080 response: 00:15:15.080 { 00:15:15.080 "code": -19, 00:15:15.080 "message": "No such device" 00:15:15.080 } 00:15:15.080 02:34:48 -- common/autotest_common.sh@641 -- # es=1 00:15:15.080 02:34:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:15.080 02:34:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:15.080 02:34:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:15.080 02:34:48 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:15.080 aio_bdev 00:15:15.080 02:34:48 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a 00:15:15.080 02:34:48 -- 
common/autotest_common.sh@885 -- # local bdev_name=d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a 00:15:15.080 02:34:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:15.080 02:34:48 -- common/autotest_common.sh@887 -- # local i 00:15:15.080 02:34:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:15.080 02:34:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:15.080 02:34:48 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:15.340 02:34:48 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a -t 2000 00:15:15.340 [ 00:15:15.340 { 00:15:15.340 "name": "d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a", 00:15:15.340 "aliases": [ 00:15:15.340 "lvs/lvol" 00:15:15.340 ], 00:15:15.340 "product_name": "Logical Volume", 00:15:15.340 "block_size": 4096, 00:15:15.340 "num_blocks": 38912, 00:15:15.341 "uuid": "d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a", 00:15:15.341 "assigned_rate_limits": { 00:15:15.341 "rw_ios_per_sec": 0, 00:15:15.341 "rw_mbytes_per_sec": 0, 00:15:15.341 "r_mbytes_per_sec": 0, 00:15:15.341 "w_mbytes_per_sec": 0 00:15:15.341 }, 00:15:15.341 "claimed": false, 00:15:15.341 "zoned": false, 00:15:15.341 "supported_io_types": { 00:15:15.341 "read": true, 00:15:15.341 "write": true, 00:15:15.341 "unmap": true, 00:15:15.341 "write_zeroes": true, 00:15:15.341 "flush": false, 00:15:15.341 "reset": true, 00:15:15.341 "compare": false, 00:15:15.341 "compare_and_write": false, 00:15:15.341 "abort": false, 00:15:15.341 "nvme_admin": false, 00:15:15.341 "nvme_io": false 00:15:15.341 }, 00:15:15.341 "driver_specific": { 00:15:15.341 "lvol": { 00:15:15.341 "lvol_store_uuid": "d4d23176-fa91-4020-986e-d7a6e7c86271", 00:15:15.341 "base_bdev": "aio_bdev", 00:15:15.341 "thin_provision": false, 00:15:15.341 "snapshot": false, 00:15:15.341 "clone": false, 00:15:15.341 "esnap_clone": false 00:15:15.341 } 00:15:15.341 } 00:15:15.341 } 00:15:15.341 ] 00:15:15.341 02:34:48 -- common/autotest_common.sh@893 -- # return 0 00:15:15.341 02:34:48 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4d23176-fa91-4020-986e-d7a6e7c86271 00:15:15.341 02:34:48 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:15.602 02:34:49 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:15.602 02:34:49 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4d23176-fa91-4020-986e-d7a6e7c86271 00:15:15.602 02:34:49 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:15.602 02:34:49 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:15.862 02:34:49 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d9d12cb9-bc93-4c0a-b7aa-4660cf83f51a 00:15:15.862 02:34:49 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d4d23176-fa91-4020-986e-d7a6e7c86271 00:15:16.123 02:34:49 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:16.123 02:34:49 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:16.123 00:15:16.123 real 0m16.836s 00:15:16.123 user 
0m44.150s 00:15:16.123 sys 0m2.844s 00:15:16.123 02:34:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:16.123 02:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:16.123 ************************************ 00:15:16.123 END TEST lvs_grow_dirty 00:15:16.123 ************************************ 00:15:16.385 02:34:49 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:16.385 02:34:49 -- common/autotest_common.sh@794 -- # type=--id 00:15:16.385 02:34:49 -- common/autotest_common.sh@795 -- # id=0 00:15:16.385 02:34:49 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:16.385 02:34:49 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:16.385 02:34:49 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:16.385 02:34:49 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:16.385 02:34:49 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:16.385 02:34:49 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:16.385 nvmf_trace.0 00:15:16.385 02:34:49 -- common/autotest_common.sh@809 -- # return 0 00:15:16.385 02:34:49 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:16.385 02:34:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:16.385 02:34:49 -- nvmf/common.sh@117 -- # sync 00:15:16.385 02:34:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:16.385 02:34:49 -- nvmf/common.sh@120 -- # set +e 00:15:16.385 02:34:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.385 02:34:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:16.385 rmmod nvme_tcp 00:15:16.385 rmmod nvme_fabrics 00:15:16.385 rmmod nvme_keyring 00:15:16.385 02:34:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.385 02:34:49 -- nvmf/common.sh@124 -- # set -e 00:15:16.385 02:34:49 -- nvmf/common.sh@125 -- # return 0 00:15:16.385 02:34:49 -- nvmf/common.sh@478 -- # '[' -n 78826 ']' 00:15:16.385 02:34:49 -- nvmf/common.sh@479 -- # killprocess 78826 00:15:16.385 02:34:49 -- common/autotest_common.sh@936 -- # '[' -z 78826 ']' 00:15:16.385 02:34:49 -- common/autotest_common.sh@940 -- # kill -0 78826 00:15:16.385 02:34:49 -- common/autotest_common.sh@941 -- # uname 00:15:16.385 02:34:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:16.385 02:34:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78826 00:15:16.385 02:34:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:16.385 02:34:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:16.385 02:34:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78826' 00:15:16.385 killing process with pid 78826 00:15:16.385 02:34:49 -- common/autotest_common.sh@955 -- # kill 78826 00:15:16.385 02:34:49 -- common/autotest_common.sh@960 -- # wait 78826 00:15:16.646 02:34:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:16.646 02:34:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:16.646 02:34:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:16.646 02:34:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.646 02:34:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.646 02:34:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.646 02:34:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.646 02:34:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:15:18.561 02:34:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:18.561 00:15:18.561 real 0m42.657s 00:15:18.561 user 1m4.765s 00:15:18.561 sys 0m9.735s 00:15:18.561 02:34:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:18.561 02:34:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.561 ************************************ 00:15:18.561 END TEST nvmf_lvs_grow 00:15:18.561 ************************************ 00:15:18.561 02:34:52 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:18.561 02:34:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:18.561 02:34:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:18.561 02:34:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.822 ************************************ 00:15:18.822 START TEST nvmf_bdev_io_wait 00:15:18.822 ************************************ 00:15:18.822 02:34:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:18.822 * Looking for test storage... 00:15:18.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:18.822 02:34:52 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.822 02:34:52 -- nvmf/common.sh@7 -- # uname -s 00:15:18.822 02:34:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.822 02:34:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.822 02:34:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.822 02:34:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.822 02:34:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.822 02:34:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.822 02:34:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.822 02:34:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.822 02:34:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.822 02:34:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.822 02:34:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:18.822 02:34:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:18.822 02:34:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.822 02:34:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.822 02:34:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:18.823 02:34:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.823 02:34:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:18.823 02:34:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.823 02:34:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.823 02:34:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.823 02:34:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.823 02:34:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.823 02:34:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.823 02:34:52 -- paths/export.sh@5 -- # export PATH 00:15:18.823 02:34:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.823 02:34:52 -- nvmf/common.sh@47 -- # : 0 00:15:18.823 02:34:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:18.823 02:34:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:18.823 02:34:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.823 02:34:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.823 02:34:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.823 02:34:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:18.823 02:34:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:18.823 02:34:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:18.823 02:34:52 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:18.823 02:34:52 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:18.823 02:34:52 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:18.823 02:34:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:18.823 02:34:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.823 02:34:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:18.823 02:34:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:18.823 02:34:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:18.823 02:34:52 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.823 02:34:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.823 02:34:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.823 02:34:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:18.823 02:34:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:18.823 02:34:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:18.823 02:34:52 -- common/autotest_common.sh@10 -- # set +x 00:15:25.414 02:34:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:25.414 02:34:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:25.414 02:34:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:25.414 02:34:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:25.414 02:34:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:25.414 02:34:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:25.414 02:34:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:25.414 02:34:58 -- nvmf/common.sh@295 -- # net_devs=() 00:15:25.414 02:34:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:25.414 02:34:58 -- nvmf/common.sh@296 -- # e810=() 00:15:25.414 02:34:58 -- nvmf/common.sh@296 -- # local -ga e810 00:15:25.414 02:34:58 -- nvmf/common.sh@297 -- # x722=() 00:15:25.414 02:34:58 -- nvmf/common.sh@297 -- # local -ga x722 00:15:25.414 02:34:58 -- nvmf/common.sh@298 -- # mlx=() 00:15:25.414 02:34:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:25.414 02:34:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.414 02:34:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.414 02:34:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.414 02:34:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.414 02:34:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.414 02:34:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.414 02:34:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.414 02:34:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.414 02:34:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.414 02:34:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.414 02:34:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.414 02:34:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:25.414 02:34:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:25.414 02:34:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:25.414 02:34:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:25.414 02:34:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:25.414 02:34:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:25.414 02:34:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.414 02:34:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:25.414 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:25.414 02:34:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.414 02:34:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.414 02:34:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.414 02:34:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.414 02:34:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.414 02:34:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:15:25.414 02:34:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:25.415 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:25.415 02:34:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.415 02:34:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.415 02:34:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.415 02:34:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.415 02:34:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.415 02:34:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:25.415 02:34:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:25.415 02:34:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:25.415 02:34:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.415 02:34:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.415 02:34:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:25.415 02:34:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.415 02:34:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:25.415 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:25.415 02:34:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.415 02:34:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.415 02:34:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.415 02:34:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:25.415 02:34:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.415 02:34:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:25.415 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:25.415 02:34:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.415 02:34:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:25.415 02:34:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:25.415 02:34:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:25.415 02:34:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:25.415 02:34:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:25.415 02:34:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.415 02:34:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.415 02:34:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.415 02:34:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:25.415 02:34:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.415 02:34:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.415 02:34:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:25.415 02:34:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.415 02:34:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.415 02:34:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:25.415 02:34:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:25.415 02:34:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.415 02:34:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.415 02:34:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.415 02:34:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.415 02:34:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:25.415 02:34:58 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.415 02:34:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.415 02:34:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.415 02:34:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:25.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:15:25.415 00:15:25.415 --- 10.0.0.2 ping statistics --- 00:15:25.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.415 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:15:25.415 02:34:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.465 ms 00:15:25.415 00:15:25.415 --- 10.0.0.1 ping statistics --- 00:15:25.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.415 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:15:25.415 02:34:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.415 02:34:59 -- nvmf/common.sh@411 -- # return 0 00:15:25.415 02:34:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:25.415 02:34:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.415 02:34:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:25.415 02:34:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:25.415 02:34:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.415 02:34:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:25.415 02:34:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:25.677 02:34:59 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:25.677 02:34:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:25.677 02:34:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:25.677 02:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:25.677 02:34:59 -- nvmf/common.sh@470 -- # nvmfpid=83821 00:15:25.677 02:34:59 -- nvmf/common.sh@471 -- # waitforlisten 83821 00:15:25.677 02:34:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:25.677 02:34:59 -- common/autotest_common.sh@817 -- # '[' -z 83821 ']' 00:15:25.677 02:34:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.677 02:34:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:25.677 02:34:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.677 02:34:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:25.677 02:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:25.677 [2024-04-27 02:34:59.097493] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:15:25.677 [2024-04-27 02:34:59.097541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.677 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.677 [2024-04-27 02:34:59.164612] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.677 [2024-04-27 02:34:59.229199] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.677 [2024-04-27 02:34:59.229238] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.677 [2024-04-27 02:34:59.229246] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.677 [2024-04-27 02:34:59.229254] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.677 [2024-04-27 02:34:59.229261] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.677 [2024-04-27 02:34:59.229335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.677 [2024-04-27 02:34:59.229471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.677 [2024-04-27 02:34:59.229490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.677 [2024-04-27 02:34:59.229502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.621 02:34:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:26.621 02:34:59 -- common/autotest_common.sh@850 -- # return 0 00:15:26.621 02:34:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:26.621 02:34:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:26.621 02:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:26.621 02:34:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.621 02:34:59 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:26.621 02:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.621 02:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:26.621 02:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.621 02:34:59 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:26.621 02:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.621 02:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:26.621 02:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.621 02:34:59 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:26.621 02:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.621 02:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:26.621 [2024-04-27 02:34:59.973578] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.621 02:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.621 02:34:59 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:26.621 02:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.621 02:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:26.621 Malloc0 00:15:26.621 02:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.621 02:35:00 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:26.621 02:35:00 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.621 02:35:00 -- common/autotest_common.sh@10 -- # set +x 00:15:26.621 02:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.621 02:35:00 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:26.621 02:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.621 02:35:00 -- common/autotest_common.sh@10 -- # set +x 00:15:26.621 02:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.621 02:35:00 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.622 02:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.622 02:35:00 -- common/autotest_common.sh@10 -- # set +x 00:15:26.622 [2024-04-27 02:35:00.048632] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.622 02:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.622 02:35:00 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=83927 00:15:26.622 02:35:00 -- target/bdev_io_wait.sh@30 -- # READ_PID=83929 00:15:26.622 02:35:00 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:26.622 02:35:00 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:26.622 02:35:00 -- nvmf/common.sh@521 -- # config=() 00:15:26.622 02:35:00 -- nvmf/common.sh@521 -- # local subsystem config 00:15:26.622 02:35:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:26.622 02:35:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:26.622 { 00:15:26.622 "params": { 00:15:26.622 "name": "Nvme$subsystem", 00:15:26.622 "trtype": "$TEST_TRANSPORT", 00:15:26.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:26.622 "adrfam": "ipv4", 00:15:26.622 "trsvcid": "$NVMF_PORT", 00:15:26.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:26.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:26.622 "hdgst": ${hdgst:-false}, 00:15:26.622 "ddgst": ${ddgst:-false} 00:15:26.622 }, 00:15:26.622 "method": "bdev_nvme_attach_controller" 00:15:26.622 } 00:15:26.622 EOF 00:15:26.622 )") 00:15:26.622 02:35:00 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=83931 00:15:26.622 02:35:00 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:26.622 02:35:00 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:26.622 02:35:00 -- nvmf/common.sh@521 -- # config=() 00:15:26.622 02:35:00 -- nvmf/common.sh@521 -- # local subsystem config 00:15:26.622 02:35:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:26.622 02:35:00 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=83934 00:15:26.622 02:35:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:26.622 { 00:15:26.622 "params": { 00:15:26.622 "name": "Nvme$subsystem", 00:15:26.622 "trtype": "$TEST_TRANSPORT", 00:15:26.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:26.622 "adrfam": "ipv4", 00:15:26.622 "trsvcid": "$NVMF_PORT", 00:15:26.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:26.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:26.622 "hdgst": ${hdgst:-false}, 00:15:26.622 "ddgst": ${ddgst:-false} 00:15:26.622 }, 00:15:26.622 "method": "bdev_nvme_attach_controller" 00:15:26.622 } 00:15:26.622 EOF 00:15:26.622 )") 00:15:26.622 02:35:00 -- 
target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:26.622 02:35:00 -- target/bdev_io_wait.sh@35 -- # sync 00:15:26.622 02:35:00 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:26.622 02:35:00 -- nvmf/common.sh@543 -- # cat 00:15:26.622 02:35:00 -- nvmf/common.sh@521 -- # config=() 00:15:26.622 02:35:00 -- nvmf/common.sh@521 -- # local subsystem config 00:15:26.622 02:35:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:26.622 02:35:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:26.622 { 00:15:26.622 "params": { 00:15:26.622 "name": "Nvme$subsystem", 00:15:26.622 "trtype": "$TEST_TRANSPORT", 00:15:26.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:26.622 "adrfam": "ipv4", 00:15:26.622 "trsvcid": "$NVMF_PORT", 00:15:26.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:26.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:26.622 "hdgst": ${hdgst:-false}, 00:15:26.622 "ddgst": ${ddgst:-false} 00:15:26.622 }, 00:15:26.622 "method": "bdev_nvme_attach_controller" 00:15:26.622 } 00:15:26.622 EOF 00:15:26.622 )") 00:15:26.622 02:35:00 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:26.622 02:35:00 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:26.622 02:35:00 -- nvmf/common.sh@521 -- # config=() 00:15:26.622 02:35:00 -- nvmf/common.sh@521 -- # local subsystem config 00:15:26.622 02:35:00 -- nvmf/common.sh@543 -- # cat 00:15:26.622 02:35:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:26.622 02:35:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:26.622 { 00:15:26.622 "params": { 00:15:26.622 "name": "Nvme$subsystem", 00:15:26.622 "trtype": "$TEST_TRANSPORT", 00:15:26.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:26.622 "adrfam": "ipv4", 00:15:26.622 "trsvcid": "$NVMF_PORT", 00:15:26.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:26.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:26.622 "hdgst": ${hdgst:-false}, 00:15:26.622 "ddgst": ${ddgst:-false} 00:15:26.622 }, 00:15:26.622 "method": "bdev_nvme_attach_controller" 00:15:26.622 } 00:15:26.622 EOF 00:15:26.622 )") 00:15:26.622 02:35:00 -- nvmf/common.sh@543 -- # cat 00:15:26.622 02:35:00 -- target/bdev_io_wait.sh@37 -- # wait 83927 00:15:26.622 02:35:00 -- nvmf/common.sh@543 -- # cat 00:15:26.622 02:35:00 -- nvmf/common.sh@545 -- # jq . 00:15:26.622 02:35:00 -- nvmf/common.sh@545 -- # jq . 00:15:26.622 02:35:00 -- nvmf/common.sh@545 -- # jq . 00:15:26.622 02:35:00 -- nvmf/common.sh@546 -- # IFS=, 00:15:26.622 02:35:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:26.622 "params": { 00:15:26.622 "name": "Nvme1", 00:15:26.622 "trtype": "tcp", 00:15:26.622 "traddr": "10.0.0.2", 00:15:26.622 "adrfam": "ipv4", 00:15:26.622 "trsvcid": "4420", 00:15:26.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:26.622 "hdgst": false, 00:15:26.622 "ddgst": false 00:15:26.622 }, 00:15:26.622 "method": "bdev_nvme_attach_controller" 00:15:26.622 }' 00:15:26.622 02:35:00 -- nvmf/common.sh@545 -- # jq . 
00:15:26.622 02:35:00 -- nvmf/common.sh@546 -- # IFS=, 00:15:26.622 02:35:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:26.622 "params": { 00:15:26.622 "name": "Nvme1", 00:15:26.622 "trtype": "tcp", 00:15:26.622 "traddr": "10.0.0.2", 00:15:26.622 "adrfam": "ipv4", 00:15:26.622 "trsvcid": "4420", 00:15:26.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:26.622 "hdgst": false, 00:15:26.622 "ddgst": false 00:15:26.622 }, 00:15:26.622 "method": "bdev_nvme_attach_controller" 00:15:26.622 }' 00:15:26.622 02:35:00 -- nvmf/common.sh@546 -- # IFS=, 00:15:26.622 02:35:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:26.622 "params": { 00:15:26.622 "name": "Nvme1", 00:15:26.622 "trtype": "tcp", 00:15:26.622 "traddr": "10.0.0.2", 00:15:26.622 "adrfam": "ipv4", 00:15:26.622 "trsvcid": "4420", 00:15:26.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:26.622 "hdgst": false, 00:15:26.622 "ddgst": false 00:15:26.622 }, 00:15:26.622 "method": "bdev_nvme_attach_controller" 00:15:26.622 }' 00:15:26.622 02:35:00 -- nvmf/common.sh@546 -- # IFS=, 00:15:26.622 02:35:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:26.622 "params": { 00:15:26.622 "name": "Nvme1", 00:15:26.622 "trtype": "tcp", 00:15:26.622 "traddr": "10.0.0.2", 00:15:26.622 "adrfam": "ipv4", 00:15:26.622 "trsvcid": "4420", 00:15:26.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:26.622 "hdgst": false, 00:15:26.622 "ddgst": false 00:15:26.622 }, 00:15:26.622 "method": "bdev_nvme_attach_controller" 00:15:26.622 }' 00:15:26.622 [2024-04-27 02:35:00.102341] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:15:26.622 [2024-04-27 02:35:00.102395] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:26.622 [2024-04-27 02:35:00.102613] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:15:26.622 [2024-04-27 02:35:00.102657] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:26.622 [2024-04-27 02:35:00.103049] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:15:26.622 [2024-04-27 02:35:00.103091] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:26.622 [2024-04-27 02:35:00.106875] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:15:26.622 [2024-04-27 02:35:00.106924] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:26.622 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.622 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.883 [2024-04-27 02:35:00.248781] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.883 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.883 [2024-04-27 02:35:00.297476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:26.883 [2024-04-27 02:35:00.310546] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.883 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.883 [2024-04-27 02:35:00.355982] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.883 [2024-04-27 02:35:00.360178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:26.883 [2024-04-27 02:35:00.404846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:26.883 [2024-04-27 02:35:00.416254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.883 [2024-04-27 02:35:00.465642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:27.144 Running I/O for 1 seconds... 00:15:27.144 Running I/O for 1 seconds... 00:15:27.144 Running I/O for 1 seconds... 00:15:27.144 Running I/O for 1 seconds... 00:15:28.089 00:15:28.089 Latency(us) 00:15:28.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.089 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:28.089 Nvme1n1 : 1.00 192206.27 750.81 0.00 0.00 663.14 259.41 785.07 00:15:28.089 =================================================================================================================== 00:15:28.089 Total : 192206.27 750.81 0.00 0.00 663.14 259.41 785.07 00:15:28.089 00:15:28.089 Latency(us) 00:15:28.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.089 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:28.089 Nvme1n1 : 1.01 8608.24 33.63 0.00 0.00 14762.07 3850.24 20097.71 00:15:28.089 =================================================================================================================== 00:15:28.089 Total : 8608.24 33.63 0.00 0.00 14762.07 3850.24 20097.71 00:15:28.089 00:15:28.089 Latency(us) 00:15:28.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.089 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:28.089 Nvme1n1 : 1.01 17463.37 68.22 0.00 0.00 7306.91 2798.93 19770.03 00:15:28.089 =================================================================================================================== 00:15:28.089 Total : 17463.37 68.22 0.00 0.00 7306.91 2798.93 19770.03 00:15:28.089 00:15:28.089 Latency(us) 00:15:28.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.089 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:28.089 Nvme1n1 : 1.00 8305.73 32.44 0.00 0.00 15369.86 4505.60 38666.24 00:15:28.089 =================================================================================================================== 00:15:28.089 Total : 8305.73 32.44 0.00 0.00 15369.86 4505.60 38666.24 00:15:28.350 02:35:01 -- target/bdev_io_wait.sh@38 -- # wait 83929 00:15:28.350 
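The four bdevperf jobs above (write, read, flush and unmap, one per core mask) each receive their controller configuration through bash process substitution: gen_nvmf_target_json renders the bdev_nvme_attach_controller entry printed a few lines earlier, and bdevperf reads it from the anonymous descriptor passed as --json /dev/fd/63. A minimal stand-alone sketch of that plumbing follows; cat stands in for bdevperf, and the JSON is trimmed to the entry shown in the trace (the real file wraps such entries in SPDK's full subsystem/config layout).

#!/usr/bin/env bash
# Sketch only: the --json <(generator) pattern used by the test scripts.
gen_json() {
    printf '%s\n' '{ "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1" } }'
}
# In the trace the consumer is build/examples/bdevperf --json /dev/fd/63 ...;
# cat merely shows that the child sees the rendered JSON on that descriptor.
cat <(gen_json)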
02:35:01 -- target/bdev_io_wait.sh@39 -- # wait 83931 00:15:28.350 02:35:01 -- target/bdev_io_wait.sh@40 -- # wait 83934 00:15:28.350 02:35:01 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.350 02:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:28.350 02:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:28.350 02:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:28.350 02:35:01 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:28.350 02:35:01 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:28.350 02:35:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:28.350 02:35:01 -- nvmf/common.sh@117 -- # sync 00:15:28.350 02:35:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:28.350 02:35:01 -- nvmf/common.sh@120 -- # set +e 00:15:28.350 02:35:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:28.350 02:35:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:28.350 rmmod nvme_tcp 00:15:28.350 rmmod nvme_fabrics 00:15:28.350 rmmod nvme_keyring 00:15:28.350 02:35:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.350 02:35:01 -- nvmf/common.sh@124 -- # set -e 00:15:28.350 02:35:01 -- nvmf/common.sh@125 -- # return 0 00:15:28.350 02:35:01 -- nvmf/common.sh@478 -- # '[' -n 83821 ']' 00:15:28.350 02:35:01 -- nvmf/common.sh@479 -- # killprocess 83821 00:15:28.350 02:35:01 -- common/autotest_common.sh@936 -- # '[' -z 83821 ']' 00:15:28.350 02:35:01 -- common/autotest_common.sh@940 -- # kill -0 83821 00:15:28.350 02:35:01 -- common/autotest_common.sh@941 -- # uname 00:15:28.350 02:35:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:28.350 02:35:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83821 00:15:28.350 02:35:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:28.350 02:35:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:28.350 02:35:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83821' 00:15:28.350 killing process with pid 83821 00:15:28.350 02:35:01 -- common/autotest_common.sh@955 -- # kill 83821 00:15:28.350 02:35:01 -- common/autotest_common.sh@960 -- # wait 83821 00:15:28.631 02:35:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:28.631 02:35:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:28.631 02:35:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:28.631 02:35:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.631 02:35:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.631 02:35:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.631 02:35:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.631 02:35:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.546 02:35:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:30.547 00:15:30.547 real 0m11.850s 00:15:30.547 user 0m18.416s 00:15:30.547 sys 0m6.328s 00:15:30.547 02:35:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:30.547 02:35:04 -- common/autotest_common.sh@10 -- # set +x 00:15:30.547 ************************************ 00:15:30.547 END TEST nvmf_bdev_io_wait 00:15:30.547 ************************************ 00:15:30.807 02:35:04 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:30.807 02:35:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 
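The killprocess trace above (pid 83821) follows a small defensive pattern before signalling the nvmf target: check that the pid is still alive with kill -0, read its command name with ps to confirm it is an SPDK reactor rather than a sudo wrapper, then kill and reap it. A rough sketch of that pattern, not the exact autotest_common.sh helper:

kill_reactor() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1      # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")     # expect reactor_0, reactor_1, ...
    [[ $name == sudo ]] && return 1             # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reaping only works for our own children
}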
00:15:30.808 02:35:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:30.808 02:35:04 -- common/autotest_common.sh@10 -- # set +x 00:15:30.808 ************************************ 00:15:30.808 START TEST nvmf_queue_depth 00:15:30.808 ************************************ 00:15:30.808 02:35:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:30.808 * Looking for test storage... 00:15:31.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.069 02:35:04 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.069 02:35:04 -- nvmf/common.sh@7 -- # uname -s 00:15:31.069 02:35:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.069 02:35:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.069 02:35:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.069 02:35:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.069 02:35:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.069 02:35:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.069 02:35:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.069 02:35:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.069 02:35:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.069 02:35:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.069 02:35:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.069 02:35:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.069 02:35:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.069 02:35:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.069 02:35:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.069 02:35:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.069 02:35:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.069 02:35:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.069 02:35:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.069 02:35:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.069 02:35:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.069 02:35:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.069 02:35:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.069 02:35:04 -- paths/export.sh@5 -- # export PATH 00:15:31.069 02:35:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.069 02:35:04 -- nvmf/common.sh@47 -- # : 0 00:15:31.069 02:35:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.069 02:35:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.069 02:35:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.069 02:35:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.069 02:35:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.069 02:35:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.069 02:35:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.070 02:35:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.070 02:35:04 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:31.070 02:35:04 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:31.070 02:35:04 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:31.070 02:35:04 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:31.070 02:35:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:31.070 02:35:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.070 02:35:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:31.070 02:35:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:31.070 02:35:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:31.070 02:35:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.070 02:35:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.070 02:35:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.070 02:35:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:31.070 02:35:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:31.070 02:35:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:31.070 02:35:04 -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.666 02:35:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:37.666 02:35:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:37.666 02:35:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:37.666 02:35:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:37.666 02:35:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:37.666 02:35:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:37.666 02:35:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:37.666 02:35:11 -- nvmf/common.sh@295 -- # net_devs=() 00:15:37.666 02:35:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:37.666 02:35:11 -- nvmf/common.sh@296 -- # e810=() 00:15:37.666 02:35:11 -- nvmf/common.sh@296 -- # local -ga e810 00:15:37.666 02:35:11 -- nvmf/common.sh@297 -- # x722=() 00:15:37.666 02:35:11 -- nvmf/common.sh@297 -- # local -ga x722 00:15:37.666 02:35:11 -- nvmf/common.sh@298 -- # mlx=() 00:15:37.666 02:35:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:37.666 02:35:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.666 02:35:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.666 02:35:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.666 02:35:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.666 02:35:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.666 02:35:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.666 02:35:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.666 02:35:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.666 02:35:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.666 02:35:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.666 02:35:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.666 02:35:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:37.666 02:35:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:37.666 02:35:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:37.666 02:35:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.666 02:35:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:37.666 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:37.666 02:35:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.666 02:35:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:37.666 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:37.666 02:35:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:15:37.666 02:35:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:37.666 02:35:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:37.666 02:35:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.666 02:35:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.666 02:35:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:37.666 02:35:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.667 02:35:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:37.667 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:37.667 02:35:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.667 02:35:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.667 02:35:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.667 02:35:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:37.667 02:35:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.667 02:35:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:37.667 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:37.667 02:35:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.667 02:35:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:37.667 02:35:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:37.667 02:35:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:37.667 02:35:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:37.667 02:35:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:37.667 02:35:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.667 02:35:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.667 02:35:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.667 02:35:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:37.667 02:35:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:37.667 02:35:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:37.667 02:35:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:37.667 02:35:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:37.667 02:35:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.667 02:35:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:37.667 02:35:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:37.667 02:35:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:37.667 02:35:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:37.667 02:35:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:37.667 02:35:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:37.667 02:35:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:37.667 02:35:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:37.928 02:35:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.928 02:35:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:37.928 02:35:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:37.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:37.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:15:37.928 00:15:37.928 --- 10.0.0.2 ping statistics --- 00:15:37.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.928 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:15:37.928 02:35:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:37.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:37.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:15:37.928 00:15:37.928 --- 10.0.0.1 ping statistics --- 00:15:37.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.928 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:15:37.928 02:35:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.928 02:35:11 -- nvmf/common.sh@411 -- # return 0 00:15:37.928 02:35:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:37.928 02:35:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.928 02:35:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:37.928 02:35:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:37.928 02:35:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.928 02:35:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:37.928 02:35:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:37.928 02:35:11 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:37.928 02:35:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:37.928 02:35:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:37.928 02:35:11 -- common/autotest_common.sh@10 -- # set +x 00:15:37.928 02:35:11 -- nvmf/common.sh@470 -- # nvmfpid=88614 00:15:37.928 02:35:11 -- nvmf/common.sh@471 -- # waitforlisten 88614 00:15:37.928 02:35:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:37.928 02:35:11 -- common/autotest_common.sh@817 -- # '[' -z 88614 ']' 00:15:37.928 02:35:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.929 02:35:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:37.929 02:35:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.929 02:35:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:37.929 02:35:11 -- common/autotest_common.sh@10 -- # set +x 00:15:37.929 [2024-04-27 02:35:11.450396] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:15:37.929 [2024-04-27 02:35:11.450460] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.929 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.929 [2024-04-27 02:35:11.521906] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.189 [2024-04-27 02:35:11.593461] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.189 [2024-04-27 02:35:11.593495] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:38.189 [2024-04-27 02:35:11.593503] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.189 [2024-04-27 02:35:11.593509] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.189 [2024-04-27 02:35:11.593515] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.189 [2024-04-27 02:35:11.593541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.762 02:35:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:38.762 02:35:12 -- common/autotest_common.sh@850 -- # return 0 00:15:38.762 02:35:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:38.762 02:35:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:38.762 02:35:12 -- common/autotest_common.sh@10 -- # set +x 00:15:38.762 02:35:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.762 02:35:12 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:38.762 02:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.762 02:35:12 -- common/autotest_common.sh@10 -- # set +x 00:15:38.762 [2024-04-27 02:35:12.260293] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.762 02:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.762 02:35:12 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:38.762 02:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.762 02:35:12 -- common/autotest_common.sh@10 -- # set +x 00:15:38.762 Malloc0 00:15:38.762 02:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.762 02:35:12 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:38.762 02:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.762 02:35:12 -- common/autotest_common.sh@10 -- # set +x 00:15:38.762 02:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.762 02:35:12 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:38.762 02:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.762 02:35:12 -- common/autotest_common.sh@10 -- # set +x 00:15:38.762 02:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.762 02:35:12 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.762 02:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.762 02:35:12 -- common/autotest_common.sh@10 -- # set +x 00:15:38.762 [2024-04-27 02:35:12.316729] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.762 02:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.762 02:35:12 -- target/queue_depth.sh@30 -- # bdevperf_pid=88728 00:15:38.762 02:35:12 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:38.762 02:35:12 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:38.762 02:35:12 -- target/queue_depth.sh@33 -- # waitforlisten 88728 /var/tmp/bdevperf.sock 00:15:38.762 02:35:12 -- common/autotest_common.sh@817 -- # '[' -z 88728 ']' 00:15:38.762 
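Up to this point the queue-depth test has built the target entirely through rpc_cmd, which in these scripts is a thin wrapper around scripts/rpc.py talking to the nvmf_tgt started earlier inside the cvl_0_0_ns_spdk namespace. The equivalent stand-alone commands are collected below, with arguments copied from the trace and paths shortened to the SPDK source root; the waitforlisten step is replaced by a plain sleep for brevity.

# Target side: TCP transport, a 64 MiB / 512-byte-block malloc bdev,
# subsystem, namespace, and a listener on 10.0.0.2:4420.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: bdevperf starts in -z (wait-for-RPC) mode on its own socket,
# the controller is attached over that socket, and bdevperf.py then kicks
# off the 10 s verify run at queue depth 1024.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
sleep 2   # stand-in for waitforlisten on /var/tmp/bdevperf.sock
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests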
02:35:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:38.762 02:35:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:38.762 02:35:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:38.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:38.762 02:35:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:38.762 02:35:12 -- common/autotest_common.sh@10 -- # set +x 00:15:38.762 [2024-04-27 02:35:12.344576] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:15:38.762 [2024-04-27 02:35:12.344611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88728 ] 00:15:38.762 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.024 [2024-04-27 02:35:12.395266] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.024 [2024-04-27 02:35:12.458050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.024 02:35:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:39.024 02:35:12 -- common/autotest_common.sh@850 -- # return 0 00:15:39.024 02:35:12 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:39.024 02:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.024 02:35:12 -- common/autotest_common.sh@10 -- # set +x 00:15:39.285 NVMe0n1 00:15:39.285 02:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.285 02:35:12 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:39.285 Running I/O for 10 seconds... 
00:15:49.398 00:15:49.398 Latency(us) 00:15:49.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.398 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:49.398 Verification LBA range: start 0x0 length 0x4000 00:15:49.398 NVMe0n1 : 10.09 9527.08 37.22 0.00 0.00 106993.91 24794.45 72963.41 00:15:49.398 =================================================================================================================== 00:15:49.398 Total : 9527.08 37.22 0.00 0.00 106993.91 24794.45 72963.41 00:15:49.398 0 00:15:49.398 02:35:22 -- target/queue_depth.sh@39 -- # killprocess 88728 00:15:49.398 02:35:22 -- common/autotest_common.sh@936 -- # '[' -z 88728 ']' 00:15:49.398 02:35:22 -- common/autotest_common.sh@940 -- # kill -0 88728 00:15:49.398 02:35:22 -- common/autotest_common.sh@941 -- # uname 00:15:49.398 02:35:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:49.398 02:35:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88728 00:15:49.398 02:35:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:49.398 02:35:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:49.398 02:35:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88728' 00:15:49.398 killing process with pid 88728 00:15:49.398 02:35:22 -- common/autotest_common.sh@955 -- # kill 88728 00:15:49.398 Received shutdown signal, test time was about 10.000000 seconds 00:15:49.398 00:15:49.398 Latency(us) 00:15:49.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.398 =================================================================================================================== 00:15:49.398 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:49.398 02:35:22 -- common/autotest_common.sh@960 -- # wait 88728 00:15:49.659 02:35:23 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:49.659 02:35:23 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:49.659 02:35:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:49.659 02:35:23 -- nvmf/common.sh@117 -- # sync 00:15:49.659 02:35:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:49.659 02:35:23 -- nvmf/common.sh@120 -- # set +e 00:15:49.659 02:35:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:49.659 02:35:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:49.659 rmmod nvme_tcp 00:15:49.659 rmmod nvme_fabrics 00:15:49.659 rmmod nvme_keyring 00:15:49.659 02:35:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:49.659 02:35:23 -- nvmf/common.sh@124 -- # set -e 00:15:49.659 02:35:23 -- nvmf/common.sh@125 -- # return 0 00:15:49.659 02:35:23 -- nvmf/common.sh@478 -- # '[' -n 88614 ']' 00:15:49.659 02:35:23 -- nvmf/common.sh@479 -- # killprocess 88614 00:15:49.659 02:35:23 -- common/autotest_common.sh@936 -- # '[' -z 88614 ']' 00:15:49.659 02:35:23 -- common/autotest_common.sh@940 -- # kill -0 88614 00:15:49.659 02:35:23 -- common/autotest_common.sh@941 -- # uname 00:15:49.659 02:35:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:49.659 02:35:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88614 00:15:49.659 02:35:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:49.659 02:35:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:49.659 02:35:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88614' 00:15:49.659 killing process with pid 88614 00:15:49.659 02:35:23 -- 
common/autotest_common.sh@955 -- # kill 88614 00:15:49.659 02:35:23 -- common/autotest_common.sh@960 -- # wait 88614 00:15:49.920 02:35:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:49.920 02:35:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:49.920 02:35:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:49.920 02:35:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.920 02:35:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:49.920 02:35:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.920 02:35:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.920 02:35:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.835 02:35:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:51.835 00:15:51.835 real 0m21.018s 00:15:51.835 user 0m24.184s 00:15:51.835 sys 0m6.103s 00:15:51.835 02:35:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:51.835 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:15:51.835 ************************************ 00:15:51.835 END TEST nvmf_queue_depth 00:15:51.835 ************************************ 00:15:51.835 02:35:25 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:51.835 02:35:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:51.835 02:35:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:51.835 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:15:52.097 ************************************ 00:15:52.097 START TEST nvmf_multipath 00:15:52.097 ************************************ 00:15:52.097 02:35:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:52.097 * Looking for test storage... 
00:15:52.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:52.097 02:35:25 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:52.097 02:35:25 -- nvmf/common.sh@7 -- # uname -s 00:15:52.097 02:35:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.097 02:35:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.097 02:35:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.097 02:35:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.097 02:35:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.097 02:35:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.097 02:35:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.097 02:35:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.097 02:35:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.097 02:35:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.097 02:35:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:52.097 02:35:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:52.097 02:35:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.097 02:35:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.097 02:35:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:52.097 02:35:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.097 02:35:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:52.097 02:35:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.097 02:35:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.097 02:35:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.097 02:35:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.097 02:35:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.097 02:35:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.097 02:35:25 -- paths/export.sh@5 -- # export PATH 00:15:52.097 02:35:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.097 02:35:25 -- nvmf/common.sh@47 -- # : 0 00:15:52.097 02:35:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:52.097 02:35:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:52.097 02:35:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.097 02:35:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.097 02:35:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.097 02:35:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:52.097 02:35:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:52.097 02:35:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:52.097 02:35:25 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:52.097 02:35:25 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:52.097 02:35:25 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:52.097 02:35:25 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:52.097 02:35:25 -- target/multipath.sh@43 -- # nvmftestinit 00:15:52.097 02:35:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:52.097 02:35:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.097 02:35:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:52.097 02:35:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:52.097 02:35:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:52.097 02:35:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.097 02:35:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.097 02:35:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.097 02:35:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:52.097 02:35:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:52.097 02:35:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:52.097 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:15:58.697 02:35:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:58.697 02:35:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:58.697 02:35:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:58.697 02:35:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:58.697 02:35:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:58.697 02:35:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:58.697 02:35:32 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:15:58.697 02:35:32 -- nvmf/common.sh@295 -- # net_devs=() 00:15:58.697 02:35:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:58.697 02:35:32 -- nvmf/common.sh@296 -- # e810=() 00:15:58.697 02:35:32 -- nvmf/common.sh@296 -- # local -ga e810 00:15:58.697 02:35:32 -- nvmf/common.sh@297 -- # x722=() 00:15:58.697 02:35:32 -- nvmf/common.sh@297 -- # local -ga x722 00:15:58.697 02:35:32 -- nvmf/common.sh@298 -- # mlx=() 00:15:58.697 02:35:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:58.697 02:35:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:58.697 02:35:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:58.697 02:35:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:58.697 02:35:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:58.697 02:35:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:58.697 02:35:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:58.697 02:35:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:58.697 02:35:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:58.697 02:35:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:58.697 02:35:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:58.697 02:35:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:58.697 02:35:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:58.697 02:35:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:58.697 02:35:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:58.697 02:35:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:58.697 02:35:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:58.697 02:35:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:58.697 02:35:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.697 02:35:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:58.697 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:58.697 02:35:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:58.697 02:35:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:58.697 02:35:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.697 02:35:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.697 02:35:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:58.697 02:35:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.697 02:35:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:58.697 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:58.958 02:35:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:58.958 02:35:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:58.958 02:35:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.958 02:35:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.958 02:35:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:58.958 02:35:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:58.958 02:35:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:58.958 02:35:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:58.958 02:35:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.958 02:35:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.958 02:35:32 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:15:58.958 02:35:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.958 02:35:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:58.958 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:58.958 02:35:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.958 02:35:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.958 02:35:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.958 02:35:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:58.958 02:35:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.958 02:35:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:58.958 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:58.958 02:35:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.958 02:35:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:58.958 02:35:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:58.958 02:35:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:58.958 02:35:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:58.958 02:35:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:58.958 02:35:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.958 02:35:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.958 02:35:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:58.958 02:35:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:58.958 02:35:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:58.958 02:35:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:58.958 02:35:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:58.958 02:35:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:58.958 02:35:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.958 02:35:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:58.958 02:35:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:58.958 02:35:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:58.958 02:35:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:58.958 02:35:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:58.958 02:35:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:58.958 02:35:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:58.958 02:35:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:59.220 02:35:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:59.220 02:35:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:59.220 02:35:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:59.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.734 ms 00:15:59.220 00:15:59.220 --- 10.0.0.2 ping statistics --- 00:15:59.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.220 rtt min/avg/max/mdev = 0.734/0.734/0.734/0.000 ms 00:15:59.220 02:35:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:59.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:59.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.476 ms 00:15:59.220 00:15:59.220 --- 10.0.0.1 ping statistics --- 00:15:59.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.220 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:15:59.220 02:35:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.220 02:35:32 -- nvmf/common.sh@411 -- # return 0 00:15:59.220 02:35:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:59.220 02:35:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.220 02:35:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:59.220 02:35:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:59.220 02:35:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.220 02:35:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:59.220 02:35:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:59.220 02:35:32 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:59.220 02:35:32 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:59.220 only one NIC for nvmf test 00:15:59.220 02:35:32 -- target/multipath.sh@47 -- # nvmftestfini 00:15:59.220 02:35:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:59.220 02:35:32 -- nvmf/common.sh@117 -- # sync 00:15:59.220 02:35:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:59.220 02:35:32 -- nvmf/common.sh@120 -- # set +e 00:15:59.220 02:35:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:59.220 02:35:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:59.220 rmmod nvme_tcp 00:15:59.220 rmmod nvme_fabrics 00:15:59.220 rmmod nvme_keyring 00:15:59.220 02:35:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:59.220 02:35:32 -- nvmf/common.sh@124 -- # set -e 00:15:59.220 02:35:32 -- nvmf/common.sh@125 -- # return 0 00:15:59.220 02:35:32 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:59.220 02:35:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:59.220 02:35:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:59.220 02:35:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:59.220 02:35:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.220 02:35:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.220 02:35:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.220 02:35:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.220 02:35:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.766 02:35:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:01.766 02:35:34 -- target/multipath.sh@48 -- # exit 0 00:16:01.766 02:35:34 -- target/multipath.sh@1 -- # nvmftestfini 00:16:01.766 02:35:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:01.766 02:35:34 -- nvmf/common.sh@117 -- # sync 00:16:01.766 02:35:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:01.766 02:35:34 -- nvmf/common.sh@120 -- # set +e 00:16:01.766 02:35:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.766 02:35:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:01.766 02:35:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:01.766 02:35:34 -- nvmf/common.sh@124 -- # set -e 00:16:01.766 02:35:34 -- nvmf/common.sh@125 -- # return 0 00:16:01.766 02:35:34 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:16:01.766 02:35:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:01.766 02:35:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:01.766 02:35:34 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:16:01.766 02:35:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.766 02:35:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:01.766 02:35:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.766 02:35:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.766 02:35:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.766 02:35:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:01.766 00:16:01.766 real 0m9.340s 00:16:01.766 user 0m2.059s 00:16:01.766 sys 0m5.166s 00:16:01.766 02:35:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:01.766 02:35:34 -- common/autotest_common.sh@10 -- # set +x 00:16:01.766 ************************************ 00:16:01.766 END TEST nvmf_multipath 00:16:01.766 ************************************ 00:16:01.766 02:35:34 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:01.766 02:35:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:01.766 02:35:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:01.766 02:35:34 -- common/autotest_common.sh@10 -- # set +x 00:16:01.766 ************************************ 00:16:01.766 START TEST nvmf_zcopy 00:16:01.766 ************************************ 00:16:01.766 02:35:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:01.766 * Looking for test storage... 00:16:01.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.766 02:35:35 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.766 02:35:35 -- nvmf/common.sh@7 -- # uname -s 00:16:01.766 02:35:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.766 02:35:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.766 02:35:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.767 02:35:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.767 02:35:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.767 02:35:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.767 02:35:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.767 02:35:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.767 02:35:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.767 02:35:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.767 02:35:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:01.767 02:35:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:01.767 02:35:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.767 02:35:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.767 02:35:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.767 02:35:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.767 02:35:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.767 02:35:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.767 02:35:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.767 02:35:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.767 
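The nvmftestfini/nvmfcleanup sequence traced above unloads the host-side kernel modules inside a tolerant retry loop (set +e, up to 20 attempts); the trace only shows the first, successful pass, where rmmod reports nvme_tcp, nvme_fabrics and nvme_keyring going away. A sketch of the shape of that loop; the back-off between attempts is an assumption, not something visible in the trace.

set +e                       # removal can fail while connections are still draining
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1                  # assumed back-off between attempts
done
set -e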
02:35:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.767 02:35:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.767 02:35:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.767 02:35:35 -- paths/export.sh@5 -- # export PATH 00:16:01.767 02:35:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.767 02:35:35 -- nvmf/common.sh@47 -- # : 0 00:16:01.767 02:35:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.767 02:35:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.767 02:35:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.767 02:35:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.767 02:35:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.767 02:35:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.767 02:35:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.767 02:35:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.767 02:35:35 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:01.767 02:35:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:01.767 02:35:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.767 02:35:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:01.767 02:35:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:01.767 02:35:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:01.767 02:35:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.767 02:35:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:16:01.767 02:35:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.767 02:35:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:01.767 02:35:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:01.767 02:35:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:01.767 02:35:35 -- common/autotest_common.sh@10 -- # set +x 00:16:09.918 02:35:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:09.918 02:35:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:09.918 02:35:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:09.918 02:35:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:09.918 02:35:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:09.918 02:35:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:09.918 02:35:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:09.918 02:35:41 -- nvmf/common.sh@295 -- # net_devs=() 00:16:09.918 02:35:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:09.918 02:35:41 -- nvmf/common.sh@296 -- # e810=() 00:16:09.918 02:35:41 -- nvmf/common.sh@296 -- # local -ga e810 00:16:09.918 02:35:41 -- nvmf/common.sh@297 -- # x722=() 00:16:09.918 02:35:41 -- nvmf/common.sh@297 -- # local -ga x722 00:16:09.918 02:35:41 -- nvmf/common.sh@298 -- # mlx=() 00:16:09.918 02:35:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:09.918 02:35:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.918 02:35:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.918 02:35:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.918 02:35:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.918 02:35:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.918 02:35:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.918 02:35:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.918 02:35:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.918 02:35:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.918 02:35:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.918 02:35:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.918 02:35:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:09.918 02:35:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:09.918 02:35:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:09.918 02:35:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.918 02:35:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:09.918 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:09.918 02:35:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.918 02:35:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:09.918 Found 0000:4b:00.1 (0x8086 - 
0x159b) 00:16:09.918 02:35:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:09.918 02:35:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:09.918 02:35:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.918 02:35:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.918 02:35:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:09.918 02:35:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.918 02:35:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:09.918 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:09.918 02:35:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.918 02:35:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.918 02:35:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.918 02:35:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:09.918 02:35:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.918 02:35:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:09.918 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:09.918 02:35:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.918 02:35:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:09.918 02:35:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:09.918 02:35:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:09.918 02:35:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:09.918 02:35:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:09.918 02:35:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.918 02:35:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.918 02:35:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.918 02:35:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:09.918 02:35:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.918 02:35:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.918 02:35:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:09.918 02:35:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.918 02:35:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.918 02:35:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:09.918 02:35:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:09.918 02:35:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.918 02:35:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:09.918 02:35:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:09.918 02:35:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:09.918 02:35:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:09.918 02:35:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:09.918 02:35:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:09.918 
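In short, the nvmf_tcp_init sequence traced here splits the two E810 ports found at 0000:4b:00.0/00.1 between a network namespace that will host the SPDK target and the host side used by the initiator, so the NVMe/TCP traffic actually crosses the physical links. Condensed, with the interface names and addresses exactly as they appear in this log (the iptables rule and ping checks just below then verify the path):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # namespace that will host nvmf_tgt
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up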
02:35:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.918 02:35:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:09.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:16:09.918 00:16:09.918 --- 10.0.0.2 ping statistics --- 00:16:09.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.918 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:16:09.918 02:35:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:16:09.918 00:16:09.918 --- 10.0.0.1 ping statistics --- 00:16:09.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.918 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:16:09.918 02:35:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.918 02:35:42 -- nvmf/common.sh@411 -- # return 0 00:16:09.918 02:35:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:09.918 02:35:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.918 02:35:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:09.918 02:35:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:09.918 02:35:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.918 02:35:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:09.918 02:35:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:09.918 02:35:42 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:09.918 02:35:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:09.918 02:35:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:09.918 02:35:42 -- common/autotest_common.sh@10 -- # set +x 00:16:09.918 02:35:42 -- nvmf/common.sh@470 -- # nvmfpid=99200 00:16:09.918 02:35:42 -- nvmf/common.sh@471 -- # waitforlisten 99200 00:16:09.918 02:35:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:09.918 02:35:42 -- common/autotest_common.sh@817 -- # '[' -z 99200 ']' 00:16:09.918 02:35:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.918 02:35:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:09.918 02:35:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.918 02:35:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:09.918 02:35:42 -- common/autotest_common.sh@10 -- # set +x 00:16:09.918 [2024-04-27 02:35:42.416904] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:16:09.918 [2024-04-27 02:35:42.416969] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.918 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.918 [2024-04-27 02:35:42.487967] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.918 [2024-04-27 02:35:42.558712] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:09.918 [2024-04-27 02:35:42.558748] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.918 [2024-04-27 02:35:42.558756] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.918 [2024-04-27 02:35:42.558762] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.918 [2024-04-27 02:35:42.558768] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.918 [2024-04-27 02:35:42.558786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.918 02:35:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:09.918 02:35:43 -- common/autotest_common.sh@850 -- # return 0 00:16:09.918 02:35:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:09.918 02:35:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:09.918 02:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:09.918 02:35:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.918 02:35:43 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:09.918 02:35:43 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:09.918 02:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.918 02:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:09.919 [2024-04-27 02:35:43.225320] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.919 02:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.919 02:35:43 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:09.919 02:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.919 02:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:09.919 02:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.919 02:35:43 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.919 02:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.919 02:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:09.919 [2024-04-27 02:35:43.249479] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.919 02:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.919 02:35:43 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:09.919 02:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.919 02:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:09.919 02:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.919 02:35:43 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:09.919 02:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.919 02:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:09.919 malloc0 00:16:09.919 02:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.919 02:35:43 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:09.919 02:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.919 02:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:09.919 02:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.919 02:35:43 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:09.919 02:35:43 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:09.919 02:35:43 -- nvmf/common.sh@521 -- # config=() 00:16:09.919 02:35:43 -- nvmf/common.sh@521 -- # local subsystem config 00:16:09.919 02:35:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:09.919 02:35:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:09.919 { 00:16:09.919 "params": { 00:16:09.919 "name": "Nvme$subsystem", 00:16:09.919 "trtype": "$TEST_TRANSPORT", 00:16:09.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:09.919 "adrfam": "ipv4", 00:16:09.919 "trsvcid": "$NVMF_PORT", 00:16:09.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:09.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:09.919 "hdgst": ${hdgst:-false}, 00:16:09.919 "ddgst": ${ddgst:-false} 00:16:09.919 }, 00:16:09.919 "method": "bdev_nvme_attach_controller" 00:16:09.919 } 00:16:09.919 EOF 00:16:09.919 )") 00:16:09.919 02:35:43 -- nvmf/common.sh@543 -- # cat 00:16:09.919 02:35:43 -- nvmf/common.sh@545 -- # jq . 00:16:09.919 02:35:43 -- nvmf/common.sh@546 -- # IFS=, 00:16:09.919 02:35:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:09.919 "params": { 00:16:09.919 "name": "Nvme1", 00:16:09.919 "trtype": "tcp", 00:16:09.919 "traddr": "10.0.0.2", 00:16:09.919 "adrfam": "ipv4", 00:16:09.919 "trsvcid": "4420", 00:16:09.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:09.919 "hdgst": false, 00:16:09.919 "ddgst": false 00:16:09.919 }, 00:16:09.919 "method": "bdev_nvme_attach_controller" 00:16:09.919 }' 00:16:09.919 [2024-04-27 02:35:43.342251] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:16:09.919 [2024-04-27 02:35:43.342303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99333 ] 00:16:09.919 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.919 [2024-04-27 02:35:43.400133] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.919 [2024-04-27 02:35:43.463916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.180 Running I/O for 10 seconds... 
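Taken together, the zcopy target setup and the first bdevperf pass traced above reduce to the following sequence (paths shortened; identifiers exactly as in this log; rpc_cmd is the harness's wrapper around SPDK's JSON-RPC interface, and the /dev/fd/62 argument is presumably the process substitution carrying the JSON that gen_nvmf_target_json prints). The 10-second verify results follow below.

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target on core 1, inside the namespace
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy                         # TCP transport with zero-copy enabled
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0                                # 32 MB RAM-backed bdev, 4 KiB blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1        # expose it as NSID 1
    # first pass: 10 s verify workload, queue depth 128, 8 KiB I/O
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192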
00:16:20.185
00:16:20.185 Latency(us)
00:16:20.185 Device Information    : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average       min       max
00:16:20.185 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:16:20.185     Verification LBA range: start 0x0 length 0x1000
00:16:20.185     Nvme1n1           :      10.02  6682.74    52.21     0.00    0.00   19097.44   3454.29  41287.68
00:16:20.185 ===================================================================================================================
00:16:20.185 Total                 :             6682.74    52.21     0.00    0.00   19097.44   3454.29  41287.68
00:16:20.185 02:35:53 -- target/zcopy.sh@39 -- # perfpid=101338
00:16:20.185 02:35:53 -- target/zcopy.sh@41 -- # xtrace_disable
00:16:20.185 02:35:53 -- common/autotest_common.sh@10 -- # set +x
00:16:20.185 02:35:53 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:16:20.185 02:35:53 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:16:20.185 02:35:53 -- nvmf/common.sh@521 -- # config=()
00:16:20.185 02:35:53 -- nvmf/common.sh@521 -- # local subsystem config
00:16:20.185 02:35:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:16:20.185 02:35:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:16:20.185 {
00:16:20.185 "params": {
00:16:20.185 "name": "Nvme$subsystem",
00:16:20.185 "trtype": "$TEST_TRANSPORT",
00:16:20.185 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:20.185 "adrfam": "ipv4",
00:16:20.185 "trsvcid": "$NVMF_PORT",
00:16:20.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:20.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:20.185 "hdgst": ${hdgst:-false},
00:16:20.185 "ddgst": ${ddgst:-false}
00:16:20.185 },
00:16:20.185 "method": "bdev_nvme_attach_controller"
00:16:20.185 }
00:16:20.185 EOF
00:16:20.185 )")
00:16:20.185 [2024-04-27 02:35:53.784164] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:20.185 [2024-04-27 02:35:53.784196] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:20.185 02:35:53 -- nvmf/common.sh@543 -- # cat
00:16:20.185 02:35:53 -- nvmf/common.sh@545 -- # jq .
00:16:20.185 02:35:53 -- nvmf/common.sh@546 -- # IFS=, 00:16:20.185 02:35:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:20.185 "params": { 00:16:20.185 "name": "Nvme1", 00:16:20.185 "trtype": "tcp", 00:16:20.185 "traddr": "10.0.0.2", 00:16:20.185 "adrfam": "ipv4", 00:16:20.185 "trsvcid": "4420", 00:16:20.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:20.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:20.185 "hdgst": false, 00:16:20.185 "ddgst": false 00:16:20.185 }, 00:16:20.185 "method": "bdev_nvme_attach_controller" 00:16:20.185 }' 00:16:20.185 [2024-04-27 02:35:53.796165] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.185 [2024-04-27 02:35:53.796176] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.808193] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.808203] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.820226] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.820235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.828925] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:16:20.446 [2024-04-27 02:35:53.828981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101338 ] 00:16:20.446 [2024-04-27 02:35:53.832256] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.832266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.844289] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.844298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.446 [2024-04-27 02:35:53.856321] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.856330] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.868351] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.868361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.880384] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.880393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.887609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.446 [2024-04-27 02:35:53.892415] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.892426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.904446] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.904456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 
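The repeated message pairs from here on are expected for this test: while the second bdevperf pass (5 s of randrw with a 50/50 mix at queue depth 128, perfpid 101338) runs against Nvme1n1, the test keeps re-issuing the namespace-add RPC from the setup phase; each attempt pauses the subsystem (hence nvmf_rpc_ns_paused), finds NSID 1 already occupied by malloc0, and is rejected, so the pairs continue for the duration of the run. A single such attempt, using only names already present in this log:

    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    #   subsystem.c: Requested NSID 1 already in use
    #   nvmf_rpc.c:  Unable to add namespace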
[2024-04-27 02:35:53.916477] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.916488] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.928509] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.928522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.940540] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.940550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.949542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.446 [2024-04-27 02:35:53.952574] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.952583] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.964610] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.964629] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.976638] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.446 [2024-04-27 02:35:53.976650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.446 [2024-04-27 02:35:53.988671] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.447 [2024-04-27 02:35:53.988682] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.447 [2024-04-27 02:35:54.000703] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.447 [2024-04-27 02:35:54.000715] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.447 [2024-04-27 02:35:54.012737] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.447 [2024-04-27 02:35:54.012748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.447 [2024-04-27 02:35:54.024779] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.447 [2024-04-27 02:35:54.024795] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.447 [2024-04-27 02:35:54.036806] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.447 [2024-04-27 02:35:54.036817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.447 [2024-04-27 02:35:54.048840] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.447 [2024-04-27 02:35:54.048853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.447 [2024-04-27 02:35:54.060869] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.447 [2024-04-27 02:35:54.060879] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.072903] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.072913] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.084936] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.084945] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.096968] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.096980] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.109000] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.109012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.122869] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.122886] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.133067] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.133079] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 Running I/O for 5 seconds... 00:16:20.708 [2024-04-27 02:35:54.154776] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.154795] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.170590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.170609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.181804] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.181822] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.198121] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.198139] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.214708] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.214726] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.231877] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.231895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.248553] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.248571] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.265858] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.265876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.281502] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.281520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.299070] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 
[2024-04-27 02:35:54.299088] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.708 [2024-04-27 02:35:54.316178] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.708 [2024-04-27 02:35:54.316197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.332778] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.332796] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.349567] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.349585] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.367254] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.367273] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.383474] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.383493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.400607] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.400625] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.417859] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.417876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.432835] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.432853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.450071] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.450089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.465124] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.465142] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.482329] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.482348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.496968] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.496986] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.513899] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.513917] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.529444] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.529463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.540881] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.540898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.558043] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.558061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.970 [2024-04-27 02:35:54.574009] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.970 [2024-04-27 02:35:54.574027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.590268] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.590291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.608439] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.608458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.625604] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.625622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.643127] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.643145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.658902] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.658919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.676172] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.676191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.691003] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.691020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.708070] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.708088] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.723639] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.723657] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.740905] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.740922] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.756389] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.756407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.767762] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.767780] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.784534] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.784552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.800388] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.800405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.818133] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.818150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.834177] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.834195] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.231 [2024-04-27 02:35:54.851421] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.231 [2024-04-27 02:35:54.851439] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:54.868601] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:54.868620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:54.886344] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:54.886362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:54.902119] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:54.902136] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:54.913425] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:54.913442] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:54.930581] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:54.930599] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:54.946694] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:54.946711] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:54.958230] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:54.958248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:54.975203] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:54.975220] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:54.992808] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:54.992826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:55.008476] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:55.008493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:55.026660] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:55.026678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:55.042130] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:55.042148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:55.053259] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:55.053283] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:55.069981] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:55.069998] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:55.085673] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:55.085691] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.492 [2024-04-27 02:35:55.097113] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.492 [2024-04-27 02:35:55.097131] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.114405] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.114427] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.130319] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.130337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.148262] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.148285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.164734] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.164751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.182318] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.182336] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.198715] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.198732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.216459] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.216476] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.232683] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.232701] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.250044] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.250061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.265963] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.265980] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.283130] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.283148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.300712] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.300730] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.315740] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.315758] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.331115] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.331132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.348816] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.348833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.753 [2024-04-27 02:35:55.365520] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.753 [2024-04-27 02:35:55.365537] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.382983] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.383000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.400169] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.400186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.417300] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.417317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.434056] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.434078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.452062] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.452080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.467719] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.467736] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.485138] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.485156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.500949] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.500967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.518476] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.518493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.535645] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.535662] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.552383] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.552400] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.569862] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.569880] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.585085] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.585102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.600616] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.600635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.611628] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.611646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.015 [2024-04-27 02:35:55.628206] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.015 [2024-04-27 02:35:55.628224] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.645071] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.645089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.662613] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.662631] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.678609] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.678627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.695538] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.695556] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.711223] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.711241] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.729229] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.729248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.744605] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.744627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.756073] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.756091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.772646] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.772665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.788466] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.788485] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.806373] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.806392] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.822516] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.822534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.840018] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.840037] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.855951] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.855969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.873652] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.873671] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.277 [2024-04-27 02:35:55.890375] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.277 [2024-04-27 02:35:55.890394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:55.908459] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:55.908478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:55.924029] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:55.924047] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:55.941516] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:55.941534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:55.957022] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:55.957040] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:55.968029] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:55.968047] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:55.985088] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:55.985107] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:56.001292] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:56.001311] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:56.019069] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:56.019088] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:56.035341] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:56.035359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:56.053365] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:56.053387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:56.070299] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:56.070318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:56.087677] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:56.087696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:56.104733] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:56.104751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:56.121586] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:56.121605] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:56.138511] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:56.138530] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.539 [2024-04-27 02:35:56.153750] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.539 [2024-04-27 02:35:56.153768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.164980] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.164999] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.182052] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.182071] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.197123] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.197142] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.208644] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.208662] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.225775] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.225794] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.243839] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.243858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.258856] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.258875] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.270503] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.270522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.286788] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.286806] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.305019] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.305038] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.320236] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.320254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.331799] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.331818] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.348109] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.348133] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.364057] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.364075] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.381699] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.381718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.398809] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.398827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.801 [2024-04-27 02:35:56.415818] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.801 [2024-04-27 02:35:56.415836] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.432410] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.432429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.450468] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.450487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.465053] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.465071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.481749] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.481767] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.497494] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.497513] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.514340] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.514357] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.531963] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.531981] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.549464] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.549482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.565146] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.565163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.576385] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.576403] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.592661] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.592678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.608033] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.608051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.619517] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.619535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.636159] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.636178] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.651716] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.651734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.663110] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.663128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.063 [2024-04-27 02:35:56.679128] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.063 [2024-04-27 02:35:56.679146] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.325 [2024-04-27 02:35:56.695537] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.325 [2024-04-27 02:35:56.695555] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.325 [2024-04-27 02:35:56.712247] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.325 [2024-04-27 02:35:56.712265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.325 [2024-04-27 02:35:56.729240] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.325 [2024-04-27 02:35:56.729258] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.325 [2024-04-27 02:35:56.746557] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.325 [2024-04-27 02:35:56.746574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.325 [2024-04-27 02:35:56.763620] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.325 [2024-04-27 02:35:56.763638] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.325 [2024-04-27 02:35:56.781443] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.325 [2024-04-27 02:35:56.781462] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.326 [2024-04-27 02:35:56.797135] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.326 [2024-04-27 02:35:56.797153] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.326 [2024-04-27 02:35:56.813947] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.326 [2024-04-27 02:35:56.813965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.326 [2024-04-27 02:35:56.830740] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.326 [2024-04-27 02:35:56.830758] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.326 [2024-04-27 02:35:56.848155] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.326 [2024-04-27 02:35:56.848173] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.326 [2024-04-27 02:35:56.864253] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.326 [2024-04-27 02:35:56.864272] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.326 [2024-04-27 02:35:56.881476] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.326 [2024-04-27 02:35:56.881494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.326 [2024-04-27 02:35:56.896313] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.326 [2024-04-27 02:35:56.896332] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.326 [2024-04-27 02:35:56.913505] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.326 [2024-04-27 02:35:56.913524] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.326 [2024-04-27 02:35:56.929512] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.326 [2024-04-27 02:35:56.929531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.326 [2024-04-27 02:35:56.940854] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.326 [2024-04-27 02:35:56.940871] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:56.957445] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:56.957464] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:56.974211] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:56.974230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:56.990398] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:56.990417] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:57.007865] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:57.007883] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:57.023746] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:57.023764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:57.040590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:57.040608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:57.058027] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:57.058046] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:57.073611] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:57.073629] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:57.091612] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:57.091631] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:57.107937] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:57.107955] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:57.125347] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:57.125365] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:57.141093] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:57.141112] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:57.152609] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:57.152627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:57.169318] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:57.169336] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:57.186618] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:57.186636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.588 [2024-04-27 02:35:57.202417] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.588 [2024-04-27 02:35:57.202436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.850 [2024-04-27 02:35:57.219205] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.850 [2024-04-27 02:35:57.219223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.850 [2024-04-27 02:35:57.236699] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.850 [2024-04-27 02:35:57.236718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.850 [2024-04-27 02:35:57.252657] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.850 [2024-04-27 02:35:57.252675] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.850 [2024-04-27 02:35:57.269500] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.850 [2024-04-27 02:35:57.269518] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.850 [2024-04-27 02:35:57.287455] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.850 [2024-04-27 02:35:57.287474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.850 [2024-04-27 02:35:57.304391] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.850 [2024-04-27 02:35:57.304409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.850 [2024-04-27 02:35:57.321177] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.850 [2024-04-27 02:35:57.321195] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.850 [2024-04-27 02:35:57.338511] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.850 [2024-04-27 02:35:57.338530] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.850 [2024-04-27 02:35:57.354989] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.850 [2024-04-27 02:35:57.355007] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.850 [2024-04-27 02:35:57.372468] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.850 [2024-04-27 02:35:57.372487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.850 [2024-04-27 02:35:57.388901] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.850 [2024-04-27 02:35:57.388919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.850 [2024-04-27 02:35:57.400260] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.851 [2024-04-27 02:35:57.400284] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.851 [2024-04-27 02:35:57.416718] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.851 [2024-04-27 02:35:57.416737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.851 [2024-04-27 02:35:57.434263] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.851 [2024-04-27 02:35:57.434287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.851 [2024-04-27 02:35:57.449385] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.851 [2024-04-27 02:35:57.449405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.851 [2024-04-27 02:35:57.461319] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.851 [2024-04-27 02:35:57.461339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.112 [2024-04-27 02:35:57.477844] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.112 [2024-04-27 02:35:57.477862] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.112 [2024-04-27 02:35:57.495064] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.112 [2024-04-27 02:35:57.495082] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.112 [2024-04-27 02:35:57.511182] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.112 [2024-04-27 02:35:57.511200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.112 [2024-04-27 02:35:57.522168] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.112 [2024-04-27 02:35:57.522186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.113 [2024-04-27 02:35:57.538864] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.113 [2024-04-27 02:35:57.538882] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.113 [2024-04-27 02:35:57.554565] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.113 [2024-04-27 02:35:57.554584] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.113 [2024-04-27 02:35:57.566142] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.113 [2024-04-27 02:35:57.566161] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.113 [2024-04-27 02:35:57.582618] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.113 [2024-04-27 02:35:57.582636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.113 [2024-04-27 02:35:57.599203] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.113 [2024-04-27 02:35:57.599221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.113 [2024-04-27 02:35:57.615471] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.113 [2024-04-27 02:35:57.615490] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.113 [2024-04-27 02:35:57.633030] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.113 [2024-04-27 02:35:57.633048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.113 [2024-04-27 02:35:57.648138] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.113 [2024-04-27 02:35:57.648157] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.113 [2024-04-27 02:35:57.665302] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.113 [2024-04-27 02:35:57.665321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.113 [2024-04-27 02:35:57.681029] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.113 [2024-04-27 02:35:57.681047] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.113 [2024-04-27 02:35:57.698358] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.113 [2024-04-27 02:35:57.698377] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.113 [2024-04-27 02:35:57.713898] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.113 [2024-04-27 02:35:57.713916] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.113 [2024-04-27 02:35:57.725457] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.113 [2024-04-27 02:35:57.725475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.742470] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.742489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.759975] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.759993] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.777072] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.777090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.794075] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.794094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.809877] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.809895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.826968] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.826987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.844049] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.844067] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.859863] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.859885] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.877575] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.877594] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.895235] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.895253] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.910653] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.910671] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.928260] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.928284] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.945744] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.945763] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.962521] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.962539] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.375 [2024-04-27 02:35:57.980403] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.375 [2024-04-27 02:35:57.980422] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.637 [2024-04-27 02:35:57.995309] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.637 [2024-04-27 02:35:57.995328] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.637 [2024-04-27 02:35:58.006541] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.637 [2024-04-27 02:35:58.006559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.637 [2024-04-27 02:35:58.022975] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.637 [2024-04-27 02:35:58.022993] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.637 [2024-04-27 02:35:58.040728] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.637 [2024-04-27 02:35:58.040747] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.637 [2024-04-27 02:35:58.056125] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.637 [2024-04-27 02:35:58.056144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.637 [2024-04-27 02:35:58.067339] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.637 [2024-04-27 02:35:58.067357] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.637 [2024-04-27 02:35:58.084579] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.637 [2024-04-27 02:35:58.084597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.637 [2024-04-27 02:35:58.100194] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.637 [2024-04-27 02:35:58.100213] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.637 [2024-04-27 02:35:58.117725] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.637 [2024-04-27 02:35:58.117744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.637 [2024-04-27 02:35:58.135185] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.637 [2024-04-27 02:35:58.135204] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.637 [2024-04-27 02:35:58.151064] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.637 [2024-04-27 02:35:58.151082] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.638 [2024-04-27 02:35:58.168524] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.638 [2024-04-27 02:35:58.168546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.638 [2024-04-27 02:35:58.184848] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.638 [2024-04-27 02:35:58.184867] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.638 [2024-04-27 02:35:58.202829] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.638 [2024-04-27 02:35:58.202847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.638 [2024-04-27 02:35:58.218750] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.638 [2024-04-27 02:35:58.218768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.638 [2024-04-27 02:35:58.236246] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.638 [2024-04-27 02:35:58.236265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.638 [2024-04-27 02:35:58.251975] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.638 [2024-04-27 02:35:58.251993] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.263353] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.263371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.280271] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.280293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.295282] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.295300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.312513] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.312531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.326771] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.326788] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.343556] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.343574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.359833] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.359851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.377236] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.377254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.394808] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.394827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.410570] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.410588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.422009] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.422027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.438627] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.438645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.454567] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.454586] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.465919] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.465941] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.482835] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.482853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.498937] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.498956] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.900 [2024-04-27 02:35:58.510633] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.900 [2024-04-27 02:35:58.510651] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.527195] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.527213] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.544260] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.544282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.561451] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.561469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.578839] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.578857] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.596512] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.596530] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.612365] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.612382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.629620] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.629638] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.645994] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.646012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.663177] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.663196] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.679172] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.679190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.696561] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.696579] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.712200] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.712218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.729427] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.729446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.746690] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.746708] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.762576] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.162 [2024-04-27 02:35:58.762594] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.162 [2024-04-27 02:35:58.773529] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.163 [2024-04-27 02:35:58.773552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.424 [2024-04-27 02:35:58.789906] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.424 [2024-04-27 02:35:58.789925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.424 [2024-04-27 02:35:58.806624] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.424 [2024-04-27 02:35:58.806642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.424 [2024-04-27 02:35:58.823671] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.424 [2024-04-27 02:35:58.823690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.424 [2024-04-27 02:35:58.840176] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.424 [2024-04-27 02:35:58.840194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.424 [2024-04-27 02:35:58.857898] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.424 [2024-04-27 02:35:58.857915] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.425 [2024-04-27 02:35:58.875404] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.425 [2024-04-27 02:35:58.875422] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.425 [2024-04-27 02:35:58.890986] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.425 [2024-04-27 02:35:58.891005] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.425 [2024-04-27 02:35:58.902432] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.425 [2024-04-27 02:35:58.902449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.425 [2024-04-27 02:35:58.919365] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.425 [2024-04-27 02:35:58.919384] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.425 [2024-04-27 02:35:58.936365] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.425 [2024-04-27 02:35:58.936383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.425 [2024-04-27 02:35:58.952027] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.425 [2024-04-27 02:35:58.952046] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.425 [2024-04-27 02:35:58.963122] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.425 [2024-04-27 02:35:58.963140] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.425 [2024-04-27 02:35:58.979769] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.425 [2024-04-27 02:35:58.979787] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.425 [2024-04-27 02:35:58.996486] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.425 [2024-04-27 02:35:58.996505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.425 [2024-04-27 02:35:59.013456] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.425 [2024-04-27 02:35:59.013474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.425 [2024-04-27 02:35:59.029880] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.425 [2024-04-27 02:35:59.029898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.047312] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.047330] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.061799] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.061817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.078081] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.078099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.094189] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.094208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.111993] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.112012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.129596] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.129614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.145369] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.145387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 00:16:25.686 Latency(us) 00:16:25.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.686 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:25.686 Nvme1n1 : 5.01 13230.44 103.36 0.00 0.00 9664.80 4096.00 25449.81 00:16:25.686 =================================================================================================================== 00:16:25.686 Total : 13230.44 103.36 0.00 0.00 9664.80 4096.00 25449.81 00:16:25.686 [2024-04-27 02:35:59.157400] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.157418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.169427] 
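The long run of paired errors above ("Requested NSID 1 already in use" followed by "Unable to add namespace") is the target refusing an add-namespace RPC whose explicit NSID is already allocated on nqn.2016-06.io.spdk:cnode1. A minimal sketch of the same failure against a standalone SPDK target, assuming scripts/rpc.py is available; the malloc bdev and the create_subsystem step are illustrative and not taken from this log:

  # hypothetical setup: 64 MB malloc bdev with 512-byte blocks, open subsystem
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  # first add claims NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  # second add with the same explicit NSID is rejected, producing the errors seen above
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1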
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.169442] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.181464] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.181478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.193493] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.193507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.205523] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.205535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.217554] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.217566] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.229583] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.229594] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.241615] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.241626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.686 [2024-04-27 02:35:59.253646] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.686 [2024-04-27 02:35:59.253657] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.687 [2024-04-27 02:35:59.265677] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.687 [2024-04-27 02:35:59.265689] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.687 [2024-04-27 02:35:59.277709] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.687 [2024-04-27 02:35:59.277720] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.687 [2024-04-27 02:35:59.289741] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.687 [2024-04-27 02:35:59.289752] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (101338) - No such process 00:16:25.687 02:35:59 -- target/zcopy.sh@49 -- # wait 101338 00:16:25.687 02:35:59 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.687 02:35:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.687 02:35:59 -- common/autotest_common.sh@10 -- # set +x 00:16:25.948 02:35:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.948 02:35:59 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:25.948 02:35:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.948 02:35:59 -- common/autotest_common.sh@10 -- # set +x 00:16:25.948 delay0 00:16:25.948 02:35:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.948 02:35:59 -- 
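The bdev_delay_create call traced just above wraps the malloc0 bdev in a delay bdev named delay0; the -r/-t/-w/-n values appear to be average and tail (p99) read/write latencies in microseconds, so roughly one second per I/O, which keeps requests in flight long enough for the abort run that follows. rpc_cmd in the trace is the test framework's JSON-RPC wrapper; a direct scripts/rpc.py equivalent of the same call (values copied from the log, interpretation of the flags is an assumption) would look like:

  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s avg and p99, read and write
  # the delay0 bdev is then attached to the subsystem as NSID 1 (next trace line)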
target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:25.948 02:35:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.948 02:35:59 -- common/autotest_common.sh@10 -- # set +x 00:16:25.948 02:35:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.948 02:35:59 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:25.948 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.948 [2024-04-27 02:35:59.468654] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:32.533 Initializing NVMe Controllers 00:16:32.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:32.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:32.533 Initialization complete. Launching workers. 00:16:32.533 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 487 00:16:32.533 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 768, failed to submit 39 00:16:32.533 success 619, unsuccess 149, failed 0 00:16:32.533 02:36:05 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:32.533 02:36:05 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:32.533 02:36:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:32.533 02:36:05 -- nvmf/common.sh@117 -- # sync 00:16:32.533 02:36:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:32.533 02:36:05 -- nvmf/common.sh@120 -- # set +e 00:16:32.533 02:36:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:32.533 02:36:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:32.533 rmmod nvme_tcp 00:16:32.533 rmmod nvme_fabrics 00:16:32.533 rmmod nvme_keyring 00:16:32.533 02:36:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:32.533 02:36:05 -- nvmf/common.sh@124 -- # set -e 00:16:32.533 02:36:05 -- nvmf/common.sh@125 -- # return 0 00:16:32.533 02:36:05 -- nvmf/common.sh@478 -- # '[' -n 99200 ']' 00:16:32.533 02:36:05 -- nvmf/common.sh@479 -- # killprocess 99200 00:16:32.533 02:36:05 -- common/autotest_common.sh@936 -- # '[' -z 99200 ']' 00:16:32.533 02:36:05 -- common/autotest_common.sh@940 -- # kill -0 99200 00:16:32.533 02:36:05 -- common/autotest_common.sh@941 -- # uname 00:16:32.533 02:36:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:32.533 02:36:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99200 00:16:32.533 02:36:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:32.533 02:36:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:32.533 02:36:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99200' 00:16:32.533 killing process with pid 99200 00:16:32.533 02:36:05 -- common/autotest_common.sh@955 -- # kill 99200 00:16:32.533 02:36:05 -- common/autotest_common.sh@960 -- # wait 99200 00:16:32.533 02:36:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:32.533 02:36:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:32.533 02:36:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:32.533 02:36:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:32.533 02:36:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:32.533 02:36:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
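The zcopy test then drives the target with SPDK's abort example and tears everything down (rmmod of nvme_tcp, nvme_fabrics and nvme_keyring, then killing the target pid). The abort invocation from the trace is repeated below with the flag meanings spelled out as inferred from SPDK's perf/abort example conventions; treat the interpretations as assumptions rather than documentation:

  # -c core mask, -t run time in seconds, -q queue depth, -w I/O pattern,
  # -M read percentage for randrw, -l log level, -r transport ID of the listener
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'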
_remove_spdk_ns 00:16:32.533 02:36:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.533 02:36:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.085 02:36:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:35.085 00:16:35.085 real 0m33.086s 00:16:35.085 user 0m44.602s 00:16:35.085 sys 0m10.002s 00:16:35.085 02:36:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:35.085 02:36:08 -- common/autotest_common.sh@10 -- # set +x 00:16:35.085 ************************************ 00:16:35.085 END TEST nvmf_zcopy 00:16:35.085 ************************************ 00:16:35.085 02:36:08 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:35.085 02:36:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:35.085 02:36:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:35.085 02:36:08 -- common/autotest_common.sh@10 -- # set +x 00:16:35.085 ************************************ 00:16:35.085 START TEST nvmf_nmic 00:16:35.085 ************************************ 00:16:35.085 02:36:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:35.085 * Looking for test storage... 00:16:35.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:35.085 02:36:08 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.085 02:36:08 -- nvmf/common.sh@7 -- # uname -s 00:16:35.085 02:36:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.085 02:36:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.085 02:36:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.085 02:36:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.085 02:36:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.085 02:36:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.085 02:36:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.085 02:36:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.085 02:36:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.085 02:36:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.085 02:36:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:35.085 02:36:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:35.085 02:36:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.085 02:36:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.085 02:36:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:35.085 02:36:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.085 02:36:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:35.085 02:36:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.085 02:36:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.085 02:36:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.085 02:36:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.086 02:36:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.086 02:36:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.086 02:36:08 -- paths/export.sh@5 -- # export PATH 00:16:35.086 02:36:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.086 02:36:08 -- nvmf/common.sh@47 -- # : 0 00:16:35.086 02:36:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.086 02:36:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.086 02:36:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.086 02:36:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.086 02:36:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.086 02:36:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.086 02:36:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.086 02:36:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.086 02:36:08 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:35.086 02:36:08 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:35.086 02:36:08 -- target/nmic.sh@14 -- # nvmftestinit 00:16:35.086 02:36:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:35.086 02:36:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.086 02:36:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:35.086 02:36:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:35.086 02:36:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:35.086 02:36:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
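Before the nmic test sends any I/O, the common.sh sourced above builds the nvme-cli host identity: NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID, and NVME_CONNECT='nvme connect'. A hedged sketch of how those pieces are typically combined to reach a TCP listener; the address, port and subsystem NQN are reused from the zcopy run earlier in this log and the host ID is the one this run generated, so none of this is the nmic test's own invocation:

  HOSTNQN=$(nvme gen-hostnqn)                    # e.g. nqn.2014-08.org.nvmexpress:uuid:...
  HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be    # value reported for this CI host above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$HOSTNQN" --hostid="$HOSTID"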
_remove_spdk_ns 00:16:35.086 02:36:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.086 02:36:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.086 02:36:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:35.086 02:36:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:35.086 02:36:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:35.086 02:36:08 -- common/autotest_common.sh@10 -- # set +x 00:16:41.768 02:36:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:41.768 02:36:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:41.768 02:36:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:41.768 02:36:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:41.768 02:36:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:41.768 02:36:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:41.768 02:36:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:41.768 02:36:14 -- nvmf/common.sh@295 -- # net_devs=() 00:16:41.768 02:36:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:41.768 02:36:14 -- nvmf/common.sh@296 -- # e810=() 00:16:41.768 02:36:14 -- nvmf/common.sh@296 -- # local -ga e810 00:16:41.768 02:36:14 -- nvmf/common.sh@297 -- # x722=() 00:16:41.768 02:36:14 -- nvmf/common.sh@297 -- # local -ga x722 00:16:41.768 02:36:14 -- nvmf/common.sh@298 -- # mlx=() 00:16:41.768 02:36:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:41.768 02:36:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.768 02:36:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.768 02:36:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.768 02:36:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.768 02:36:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.768 02:36:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.769 02:36:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.769 02:36:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.769 02:36:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.769 02:36:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.769 02:36:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.769 02:36:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:41.769 02:36:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:41.769 02:36:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:41.769 02:36:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.769 02:36:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:41.769 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:41.769 02:36:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.769 02:36:14 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:41.769 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:41.769 02:36:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:41.769 02:36:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.769 02:36:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.769 02:36:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:41.769 02:36:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.769 02:36:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:41.769 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:41.769 02:36:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.769 02:36:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.769 02:36:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.769 02:36:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:41.769 02:36:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.769 02:36:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:41.769 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:41.769 02:36:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.769 02:36:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:41.769 02:36:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:41.769 02:36:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:41.769 02:36:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:41.769 02:36:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.769 02:36:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.769 02:36:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.769 02:36:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:41.769 02:36:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.769 02:36:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.769 02:36:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:41.769 02:36:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.769 02:36:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.769 02:36:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:41.769 02:36:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:41.769 02:36:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:41.769 02:36:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.769 02:36:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.769 02:36:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.769 02:36:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:41.769 02:36:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
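For readers following the trace, the target-side plumbing performed here by nvmf_tcp_init (nvmf/common.sh) condenses to roughly the sequence below. Interface names cvl_0_0/cvl_0_1 and the addresses are taken from the trace itself; this is a sketch of the effect, not the exact script:

  ip netns add cvl_0_0_ns_spdk                                  # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # accept NVMe/TCP (port 4420) on the initiator-side interface

The ping checks that follow in the trace confirm the path in both directions before nvmf_tgt is launched inside the namespace via ip netns exec.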
00:16:41.769 02:36:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.769 02:36:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.769 02:36:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:41.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:16:41.769 00:16:41.769 --- 10.0.0.2 ping statistics --- 00:16:41.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.769 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:16:41.769 02:36:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:16:41.769 00:16:41.769 --- 10.0.0.1 ping statistics --- 00:16:41.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.769 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:16:41.769 02:36:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.769 02:36:15 -- nvmf/common.sh@411 -- # return 0 00:16:41.769 02:36:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:41.769 02:36:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.769 02:36:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:41.769 02:36:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:41.769 02:36:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.769 02:36:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:41.769 02:36:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:41.769 02:36:15 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:41.769 02:36:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:41.769 02:36:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:41.769 02:36:15 -- common/autotest_common.sh@10 -- # set +x 00:16:41.769 02:36:15 -- nvmf/common.sh@470 -- # nvmfpid=108414 00:16:41.769 02:36:15 -- nvmf/common.sh@471 -- # waitforlisten 108414 00:16:41.769 02:36:15 -- common/autotest_common.sh@817 -- # '[' -z 108414 ']' 00:16:41.769 02:36:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.769 02:36:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:41.769 02:36:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.769 02:36:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:41.769 02:36:15 -- common/autotest_common.sh@10 -- # set +x 00:16:41.769 02:36:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:41.769 [2024-04-27 02:36:15.207711] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:16:41.769 [2024-04-27 02:36:15.207761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.769 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.769 [2024-04-27 02:36:15.268803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.769 [2024-04-27 02:36:15.335704] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.769 [2024-04-27 02:36:15.335739] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.769 [2024-04-27 02:36:15.335748] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.769 [2024-04-27 02:36:15.335755] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.769 [2024-04-27 02:36:15.335762] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.769 [2024-04-27 02:36:15.335892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.769 [2024-04-27 02:36:15.336007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.769 [2024-04-27 02:36:15.336133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.769 [2024-04-27 02:36:15.336135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.713 02:36:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:42.713 02:36:15 -- common/autotest_common.sh@850 -- # return 0 00:16:42.713 02:36:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:42.713 02:36:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:42.713 02:36:15 -- common/autotest_common.sh@10 -- # set +x 00:16:42.713 02:36:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.713 02:36:16 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:42.713 02:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.713 02:36:16 -- common/autotest_common.sh@10 -- # set +x 00:16:42.713 [2024-04-27 02:36:16.044933] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.713 02:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.713 02:36:16 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:42.713 02:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.713 02:36:16 -- common/autotest_common.sh@10 -- # set +x 00:16:42.713 Malloc0 00:16:42.713 02:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.713 02:36:16 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:42.713 02:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.713 02:36:16 -- common/autotest_common.sh@10 -- # set +x 00:16:42.713 02:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.713 02:36:16 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:42.713 02:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.713 02:36:16 -- common/autotest_common.sh@10 -- # set +x 00:16:42.713 02:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.713 02:36:16 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 
-s 4420 00:16:42.713 02:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.713 02:36:16 -- common/autotest_common.sh@10 -- # set +x 00:16:42.713 [2024-04-27 02:36:16.104301] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.713 02:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.713 02:36:16 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:42.713 test case1: single bdev can't be used in multiple subsystems 00:16:42.713 02:36:16 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:42.713 02:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.713 02:36:16 -- common/autotest_common.sh@10 -- # set +x 00:16:42.713 02:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.713 02:36:16 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:42.713 02:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.713 02:36:16 -- common/autotest_common.sh@10 -- # set +x 00:16:42.713 02:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.713 02:36:16 -- target/nmic.sh@28 -- # nmic_status=0 00:16:42.713 02:36:16 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:42.713 02:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.713 02:36:16 -- common/autotest_common.sh@10 -- # set +x 00:16:42.713 [2024-04-27 02:36:16.140266] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:42.713 [2024-04-27 02:36:16.140288] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:42.713 [2024-04-27 02:36:16.140296] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.713 request: 00:16:42.713 { 00:16:42.713 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:42.713 "namespace": { 00:16:42.713 "bdev_name": "Malloc0", 00:16:42.713 "no_auto_visible": false 00:16:42.713 }, 00:16:42.713 "method": "nvmf_subsystem_add_ns", 00:16:42.713 "req_id": 1 00:16:42.713 } 00:16:42.713 Got JSON-RPC error response 00:16:42.713 response: 00:16:42.713 { 00:16:42.713 "code": -32602, 00:16:42.713 "message": "Invalid parameters" 00:16:42.713 } 00:16:42.713 02:36:16 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:42.713 02:36:16 -- target/nmic.sh@29 -- # nmic_status=1 00:16:42.713 02:36:16 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:42.713 02:36:16 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:42.713 Adding namespace failed - expected result. 
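Condensed, test case1 exercises the bdev claim check: a bdev that already backs a namespace in one subsystem cannot be added to another, so the second nvmf_subsystem_add_ns is expected to fail. The rpc_cmd calls in the trace map to roughly this scripts/rpc.py sequence (names and arguments taken from the trace; a sketch, not the literal nmic.sh code):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: Malloc0 already claimed

The JSON-RPC error returned for the last call is the 'Adding namespace failed - expected result.' outcome logged above.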
00:16:42.713 02:36:16 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:42.713 test case2: host connect to nvmf target in multiple paths 00:16:42.713 02:36:16 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:42.713 02:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.713 02:36:16 -- common/autotest_common.sh@10 -- # set +x 00:16:42.713 [2024-04-27 02:36:16.152405] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:42.713 02:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.713 02:36:16 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:44.099 02:36:17 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:46.017 02:36:19 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:46.017 02:36:19 -- common/autotest_common.sh@1184 -- # local i=0 00:16:46.017 02:36:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.017 02:36:19 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:46.017 02:36:19 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:47.966 02:36:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:47.966 02:36:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:47.966 02:36:21 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:47.966 02:36:21 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:47.966 02:36:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:47.966 02:36:21 -- common/autotest_common.sh@1194 -- # return 0 00:16:47.966 02:36:21 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:47.966 [global] 00:16:47.966 thread=1 00:16:47.966 invalidate=1 00:16:47.966 rw=write 00:16:47.966 time_based=1 00:16:47.966 runtime=1 00:16:47.966 ioengine=libaio 00:16:47.966 direct=1 00:16:47.966 bs=4096 00:16:47.966 iodepth=1 00:16:47.966 norandommap=0 00:16:47.966 numjobs=1 00:16:47.966 00:16:47.966 verify_dump=1 00:16:47.966 verify_backlog=512 00:16:47.966 verify_state_save=0 00:16:47.966 do_verify=1 00:16:47.966 verify=crc32c-intel 00:16:47.966 [job0] 00:16:47.966 filename=/dev/nvme0n1 00:16:47.966 Could not set queue depth (nvme0n1) 00:16:48.233 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:48.233 fio-3.35 00:16:48.233 Starting 1 thread 00:16:49.620 00:16:49.620 job0: (groupid=0, jobs=1): err= 0: pid=109801: Sat Apr 27 02:36:22 2024 00:16:49.620 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:49.620 slat (nsec): min=6651, max=53836, avg=24378.01, stdev=3500.66 00:16:49.620 clat (usec): min=726, max=1599, avg=1061.70, stdev=124.89 00:16:49.620 lat (usec): min=750, max=1623, avg=1086.08, stdev=125.01 00:16:49.620 clat percentiles (usec): 00:16:49.620 | 1.00th=[ 750], 5.00th=[ 816], 10.00th=[ 889], 20.00th=[ 955], 00:16:49.620 | 30.00th=[ 988], 40.00th=[ 1045], 50.00th=[ 1106], 60.00th=[ 1123], 00:16:49.620 | 70.00th=[ 1156], 
80.00th=[ 1172], 90.00th=[ 1188], 95.00th=[ 1205], 00:16:49.620 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1598], 99.95th=[ 1598], 00:16:49.620 | 99.99th=[ 1598] 00:16:49.620 write: IOPS=551, BW=2206KiB/s (2259kB/s)(2208KiB/1001msec); 0 zone resets 00:16:49.620 slat (nsec): min=9577, max=82055, avg=27714.47, stdev=8611.63 00:16:49.620 clat (usec): min=412, max=935, avg=762.18, stdev=80.37 00:16:49.620 lat (usec): min=425, max=966, avg=789.90, stdev=84.35 00:16:49.620 clat percentiles (usec): 00:16:49.620 | 1.00th=[ 537], 5.00th=[ 619], 10.00th=[ 644], 20.00th=[ 717], 00:16:49.620 | 30.00th=[ 734], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 799], 00:16:49.620 | 70.00th=[ 824], 80.00th=[ 832], 90.00th=[ 857], 95.00th=[ 873], 00:16:49.620 | 99.00th=[ 906], 99.50th=[ 914], 99.90th=[ 938], 99.95th=[ 938], 00:16:49.620 | 99.99th=[ 938] 00:16:49.620 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:49.620 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:49.620 lat (usec) : 500=0.09%, 750=24.25%, 1000=43.80% 00:16:49.620 lat (msec) : 2=31.86% 00:16:49.620 cpu : usr=1.30%, sys=3.20%, ctx=1064, majf=0, minf=1 00:16:49.620 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:49.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:49.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:49.620 issued rwts: total=512,552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:49.620 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:49.620 00:16:49.620 Run status group 0 (all jobs): 00:16:49.620 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:16:49.620 WRITE: bw=2206KiB/s (2259kB/s), 2206KiB/s-2206KiB/s (2259kB/s-2259kB/s), io=2208KiB (2261kB), run=1001-1001msec 00:16:49.620 00:16:49.620 Disk stats (read/write): 00:16:49.620 nvme0n1: ios=499/512, merge=0/0, ticks=538/379, in_queue=917, util=94.59% 00:16:49.620 02:36:22 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:49.620 02:36:22 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.620 02:36:22 -- common/autotest_common.sh@1205 -- # local i=0 00:16:49.620 02:36:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:49.620 02:36:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.620 02:36:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:49.620 02:36:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.620 02:36:22 -- common/autotest_common.sh@1217 -- # return 0 00:16:49.620 02:36:22 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:49.620 02:36:22 -- target/nmic.sh@53 -- # nvmftestfini 00:16:49.620 02:36:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:49.620 02:36:22 -- nvmf/common.sh@117 -- # sync 00:16:49.620 02:36:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:49.620 02:36:22 -- nvmf/common.sh@120 -- # set +e 00:16:49.620 02:36:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:49.620 02:36:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:49.620 rmmod nvme_tcp 00:16:49.620 rmmod nvme_fabrics 00:16:49.620 rmmod nvme_keyring 00:16:49.620 02:36:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:49.620 02:36:23 -- nvmf/common.sh@124 -- # set -e 00:16:49.620 02:36:23 -- 
nvmf/common.sh@125 -- # return 0 00:16:49.620 02:36:23 -- nvmf/common.sh@478 -- # '[' -n 108414 ']' 00:16:49.620 02:36:23 -- nvmf/common.sh@479 -- # killprocess 108414 00:16:49.620 02:36:23 -- common/autotest_common.sh@936 -- # '[' -z 108414 ']' 00:16:49.620 02:36:23 -- common/autotest_common.sh@940 -- # kill -0 108414 00:16:49.621 02:36:23 -- common/autotest_common.sh@941 -- # uname 00:16:49.621 02:36:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:49.621 02:36:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108414 00:16:49.621 02:36:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:49.621 02:36:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:49.621 02:36:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108414' 00:16:49.621 killing process with pid 108414 00:16:49.621 02:36:23 -- common/autotest_common.sh@955 -- # kill 108414 00:16:49.621 02:36:23 -- common/autotest_common.sh@960 -- # wait 108414 00:16:49.621 02:36:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:49.621 02:36:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:49.621 02:36:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:49.621 02:36:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.621 02:36:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:49.621 02:36:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.621 02:36:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.621 02:36:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.168 02:36:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:52.168 00:16:52.168 real 0m16.974s 00:16:52.168 user 0m48.777s 00:16:52.168 sys 0m5.804s 00:16:52.168 02:36:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:52.168 02:36:25 -- common/autotest_common.sh@10 -- # set +x 00:16:52.168 ************************************ 00:16:52.168 END TEST nvmf_nmic 00:16:52.168 ************************************ 00:16:52.168 02:36:25 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:52.168 02:36:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:52.168 02:36:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:52.168 02:36:25 -- common/autotest_common.sh@10 -- # set +x 00:16:52.168 ************************************ 00:16:52.168 START TEST nvmf_fio_target 00:16:52.168 ************************************ 00:16:52.168 02:36:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:52.168 * Looking for test storage... 
00:16:52.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:52.168 02:36:25 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:52.168 02:36:25 -- nvmf/common.sh@7 -- # uname -s 00:16:52.168 02:36:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.168 02:36:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.168 02:36:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.168 02:36:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.168 02:36:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.168 02:36:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.168 02:36:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.168 02:36:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.168 02:36:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.168 02:36:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.168 02:36:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.168 02:36:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.168 02:36:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.168 02:36:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.168 02:36:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:52.168 02:36:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.168 02:36:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:52.168 02:36:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.168 02:36:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.168 02:36:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.168 02:36:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.168 02:36:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.168 02:36:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.168 02:36:25 -- paths/export.sh@5 -- # export PATH 00:16:52.169 02:36:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.169 02:36:25 -- nvmf/common.sh@47 -- # : 0 00:16:52.169 02:36:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:52.169 02:36:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:52.169 02:36:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.169 02:36:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.169 02:36:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.169 02:36:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:52.169 02:36:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:52.169 02:36:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:52.169 02:36:25 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:52.169 02:36:25 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:52.169 02:36:25 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:52.169 02:36:25 -- target/fio.sh@16 -- # nvmftestinit 00:16:52.169 02:36:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:52.169 02:36:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.169 02:36:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:52.169 02:36:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:52.169 02:36:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:52.169 02:36:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.169 02:36:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.169 02:36:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.169 02:36:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:52.169 02:36:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:52.169 02:36:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:52.169 02:36:25 -- common/autotest_common.sh@10 -- # set +x 00:17:00.320 02:36:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:00.320 02:36:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:00.320 02:36:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:00.320 02:36:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:00.320 02:36:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:00.320 02:36:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:00.320 02:36:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:00.320 02:36:32 -- nvmf/common.sh@295 -- # net_devs=() 
00:17:00.320 02:36:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:00.320 02:36:32 -- nvmf/common.sh@296 -- # e810=() 00:17:00.320 02:36:32 -- nvmf/common.sh@296 -- # local -ga e810 00:17:00.320 02:36:32 -- nvmf/common.sh@297 -- # x722=() 00:17:00.320 02:36:32 -- nvmf/common.sh@297 -- # local -ga x722 00:17:00.320 02:36:32 -- nvmf/common.sh@298 -- # mlx=() 00:17:00.320 02:36:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:00.320 02:36:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:00.320 02:36:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:00.320 02:36:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:00.320 02:36:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:00.320 02:36:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:00.320 02:36:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:00.320 02:36:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:00.320 02:36:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:00.320 02:36:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:00.320 02:36:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:00.320 02:36:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:00.320 02:36:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:00.320 02:36:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:00.320 02:36:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:00.320 02:36:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:00.320 02:36:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:00.320 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:00.320 02:36:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:00.320 02:36:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:00.320 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:00.320 02:36:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:00.320 02:36:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:00.320 02:36:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.320 02:36:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:00.320 02:36:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:00.320 02:36:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:00.320 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:00.320 02:36:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.320 02:36:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:00.320 02:36:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.320 02:36:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:00.320 02:36:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.320 02:36:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:00.320 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:00.320 02:36:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.320 02:36:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:00.320 02:36:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:00.320 02:36:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:00.320 02:36:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:00.320 02:36:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.320 02:36:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.320 02:36:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:00.320 02:36:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:00.321 02:36:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:00.321 02:36:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:00.321 02:36:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:00.321 02:36:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:00.321 02:36:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.321 02:36:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:00.321 02:36:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:00.321 02:36:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:00.321 02:36:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:00.321 02:36:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:00.321 02:36:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:00.321 02:36:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:00.321 02:36:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:00.321 02:36:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:00.321 02:36:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:00.321 02:36:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:00.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:00.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:17:00.321 00:17:00.321 --- 10.0.0.2 ping statistics --- 00:17:00.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.321 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:17:00.321 02:36:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:00.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:00.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.434 ms 00:17:00.321 00:17:00.321 --- 10.0.0.1 ping statistics --- 00:17:00.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.321 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:17:00.321 02:36:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.321 02:36:32 -- nvmf/common.sh@411 -- # return 0 00:17:00.321 02:36:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:00.321 02:36:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.321 02:36:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:00.321 02:36:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:00.321 02:36:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.321 02:36:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:00.321 02:36:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:00.321 02:36:32 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:00.321 02:36:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:00.321 02:36:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:00.321 02:36:32 -- common/autotest_common.sh@10 -- # set +x 00:17:00.321 02:36:32 -- nvmf/common.sh@470 -- # nvmfpid=114409 00:17:00.321 02:36:32 -- nvmf/common.sh@471 -- # waitforlisten 114409 00:17:00.321 02:36:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:00.321 02:36:32 -- common/autotest_common.sh@817 -- # '[' -z 114409 ']' 00:17:00.321 02:36:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.321 02:36:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:00.321 02:36:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.321 02:36:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:00.321 02:36:33 -- common/autotest_common.sh@10 -- # set +x 00:17:00.321 [2024-04-27 02:36:33.052789] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:17:00.321 [2024-04-27 02:36:33.052856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.321 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.321 [2024-04-27 02:36:33.125878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:00.321 [2024-04-27 02:36:33.202123] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.321 [2024-04-27 02:36:33.202165] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.321 [2024-04-27 02:36:33.202174] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.321 [2024-04-27 02:36:33.202181] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.321 [2024-04-27 02:36:33.202187] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:00.321 [2024-04-27 02:36:33.202311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.321 [2024-04-27 02:36:33.202393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.321 [2024-04-27 02:36:33.202514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:00.321 [2024-04-27 02:36:33.202516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.321 02:36:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:00.321 02:36:33 -- common/autotest_common.sh@850 -- # return 0 00:17:00.321 02:36:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:00.321 02:36:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:00.321 02:36:33 -- common/autotest_common.sh@10 -- # set +x 00:17:00.321 02:36:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.321 02:36:33 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:00.583 [2024-04-27 02:36:34.018319] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.583 02:36:34 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:00.844 02:36:34 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:00.844 02:36:34 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:00.844 02:36:34 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:00.844 02:36:34 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:01.105 02:36:34 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:01.105 02:36:34 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:01.367 02:36:34 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:01.367 02:36:34 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:01.367 02:36:34 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:01.628 02:36:35 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:01.628 02:36:35 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:01.889 02:36:35 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:01.889 02:36:35 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:01.889 02:36:35 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:01.889 02:36:35 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:02.150 02:36:35 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:02.411 02:36:35 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:02.411 02:36:35 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:02.411 02:36:35 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:02.411 02:36:35 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:02.672 02:36:36 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.672 [2024-04-27 02:36:36.264630] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.933 02:36:36 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:02.933 02:36:36 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:03.193 02:36:36 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:04.581 02:36:38 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:04.581 02:36:38 -- common/autotest_common.sh@1184 -- # local i=0 00:17:04.581 02:36:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:04.581 02:36:38 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:17:04.581 02:36:38 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:17:04.581 02:36:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:07.135 02:36:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:07.135 02:36:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:07.135 02:36:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:07.135 02:36:40 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:17:07.135 02:36:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:07.135 02:36:40 -- common/autotest_common.sh@1194 -- # return 0 00:17:07.135 02:36:40 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:07.135 [global] 00:17:07.135 thread=1 00:17:07.135 invalidate=1 00:17:07.135 rw=write 00:17:07.135 time_based=1 00:17:07.135 runtime=1 00:17:07.135 ioengine=libaio 00:17:07.135 direct=1 00:17:07.135 bs=4096 00:17:07.135 iodepth=1 00:17:07.135 norandommap=0 00:17:07.135 numjobs=1 00:17:07.135 00:17:07.135 verify_dump=1 00:17:07.135 verify_backlog=512 00:17:07.135 verify_state_save=0 00:17:07.135 do_verify=1 00:17:07.135 verify=crc32c-intel 00:17:07.135 [job0] 00:17:07.135 filename=/dev/nvme0n1 00:17:07.135 [job1] 00:17:07.135 filename=/dev/nvme0n2 00:17:07.135 [job2] 00:17:07.135 filename=/dev/nvme0n3 00:17:07.135 [job3] 00:17:07.135 filename=/dev/nvme0n4 00:17:07.135 Could not set queue depth (nvme0n1) 00:17:07.135 Could not set queue depth (nvme0n2) 00:17:07.135 Could not set queue depth (nvme0n3) 00:17:07.135 Could not set queue depth (nvme0n4) 00:17:07.135 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.135 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.135 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.135 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.135 fio-3.35 
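The job file dumped above is what fio-wrapper generates from '-p nvmf -i 4096 -d 1 -t write -r 1 -v': four libaio jobs, one per namespace of cnode1, doing 4 KiB sequential writes at queue depth 1 for one second with crc32c-intel verification. A roughly equivalent stand-alone invocation would be (a sketch; the wrapper also sets verify_dump, verify_backlog and the other options shown in the dump):

  fio --ioengine=libaio --direct=1 --bs=4096 --iodepth=1 --rw=write \
      --time_based --runtime=1 --numjobs=1 --verify=crc32c-intel \
      --name=job0 --filename=/dev/nvme0n1 \
      --name=job1 --filename=/dev/nvme0n2 \
      --name=job2 --filename=/dev/nvme0n3 \
      --name=job3 --filename=/dev/nvme0n4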
00:17:07.135 Starting 4 threads 00:17:08.540 00:17:08.540 job0: (groupid=0, jobs=1): err= 0: pid=116056: Sat Apr 27 02:36:41 2024 00:17:08.540 read: IOPS=355, BW=1423KiB/s (1457kB/s)(1424KiB/1001msec) 00:17:08.540 slat (nsec): min=24297, max=91916, avg=25254.24, stdev=3964.59 00:17:08.540 clat (usec): min=1118, max=1684, avg=1434.87, stdev=109.85 00:17:08.540 lat (usec): min=1143, max=1709, avg=1460.13, stdev=109.97 00:17:08.540 clat percentiles (usec): 00:17:08.540 | 1.00th=[ 1188], 5.00th=[ 1221], 10.00th=[ 1270], 20.00th=[ 1336], 00:17:08.540 | 30.00th=[ 1369], 40.00th=[ 1418], 50.00th=[ 1450], 60.00th=[ 1483], 00:17:08.540 | 70.00th=[ 1516], 80.00th=[ 1532], 90.00th=[ 1565], 95.00th=[ 1582], 00:17:08.540 | 99.00th=[ 1631], 99.50th=[ 1647], 99.90th=[ 1680], 99.95th=[ 1680], 00:17:08.540 | 99.99th=[ 1680] 00:17:08.540 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:08.540 slat (usec): min=10, max=3637, avg=41.02, stdev=159.41 00:17:08.540 clat (usec): min=515, max=1406, avg=875.61, stdev=115.73 00:17:08.540 lat (usec): min=545, max=4693, avg=916.62, stdev=203.75 00:17:08.540 clat percentiles (usec): 00:17:08.540 | 1.00th=[ 619], 5.00th=[ 685], 10.00th=[ 734], 20.00th=[ 799], 00:17:08.540 | 30.00th=[ 832], 40.00th=[ 857], 50.00th=[ 873], 60.00th=[ 898], 00:17:08.540 | 70.00th=[ 914], 80.00th=[ 938], 90.00th=[ 996], 95.00th=[ 1106], 00:17:08.540 | 99.00th=[ 1188], 99.50th=[ 1237], 99.90th=[ 1401], 99.95th=[ 1401], 00:17:08.540 | 99.99th=[ 1401] 00:17:08.540 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:08.540 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:08.540 lat (usec) : 750=7.03%, 1000=46.20% 00:17:08.540 lat (msec) : 2=46.77% 00:17:08.540 cpu : usr=1.40%, sys=2.70%, ctx=871, majf=0, minf=1 00:17:08.540 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.540 issued rwts: total=356,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.540 job1: (groupid=0, jobs=1): err= 0: pid=116057: Sat Apr 27 02:36:41 2024 00:17:08.540 read: IOPS=11, BW=47.2KiB/s (48.3kB/s)(48.0KiB/1017msec) 00:17:08.540 slat (nsec): min=25064, max=25785, avg=25231.50, stdev=194.08 00:17:08.540 clat (usec): min=41871, max=42224, avg=41981.76, stdev=94.25 00:17:08.540 lat (usec): min=41896, max=42249, avg=42006.99, stdev=94.29 00:17:08.540 clat percentiles (usec): 00:17:08.540 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:08.540 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:08.540 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:08.540 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:08.540 | 99.99th=[42206] 00:17:08.540 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:17:08.540 slat (nsec): min=10730, max=51315, avg=33590.98, stdev=2642.09 00:17:08.540 clat (usec): min=657, max=1146, avg=951.95, stdev=69.62 00:17:08.540 lat (usec): min=690, max=1179, avg=985.54, stdev=69.71 00:17:08.540 clat percentiles (usec): 00:17:08.540 | 1.00th=[ 734], 5.00th=[ 840], 10.00th=[ 865], 20.00th=[ 906], 00:17:08.540 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[ 955], 60.00th=[ 971], 00:17:08.540 | 70.00th=[ 988], 80.00th=[ 1004], 
90.00th=[ 1029], 95.00th=[ 1057], 00:17:08.540 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1139], 99.95th=[ 1139], 00:17:08.540 | 99.99th=[ 1139] 00:17:08.540 bw ( KiB/s): min= 96, max= 4000, per=25.43%, avg=2048.00, stdev=2760.54, samples=2 00:17:08.540 iops : min= 24, max= 1000, avg=512.00, stdev=690.14, samples=2 00:17:08.540 lat (usec) : 750=1.15%, 1000=73.28% 00:17:08.540 lat (msec) : 2=23.28%, 50=2.29% 00:17:08.540 cpu : usr=0.98%, sys=1.48%, ctx=526, majf=0, minf=1 00:17:08.540 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.540 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.540 job2: (groupid=0, jobs=1): err= 0: pid=116058: Sat Apr 27 02:36:41 2024 00:17:08.540 read: IOPS=319, BW=1277KiB/s (1308kB/s)(1280KiB/1002msec) 00:17:08.540 slat (nsec): min=24892, max=72432, avg=25889.38, stdev=3899.60 00:17:08.540 clat (usec): min=1258, max=1734, avg=1472.07, stdev=61.17 00:17:08.540 lat (usec): min=1284, max=1759, avg=1497.96, stdev=61.02 00:17:08.540 clat percentiles (usec): 00:17:08.540 | 1.00th=[ 1319], 5.00th=[ 1352], 10.00th=[ 1385], 20.00th=[ 1434], 00:17:08.540 | 30.00th=[ 1450], 40.00th=[ 1467], 50.00th=[ 1483], 60.00th=[ 1483], 00:17:08.540 | 70.00th=[ 1500], 80.00th=[ 1516], 90.00th=[ 1532], 95.00th=[ 1549], 00:17:08.540 | 99.00th=[ 1598], 99.50th=[ 1663], 99.90th=[ 1729], 99.95th=[ 1729], 00:17:08.540 | 99.99th=[ 1729] 00:17:08.540 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:17:08.540 slat (usec): min=10, max=3043, avg=42.46, stdev=138.12 00:17:08.540 clat (usec): min=675, max=1456, avg=957.56, stdev=80.77 00:17:08.540 lat (usec): min=694, max=4100, avg=1000.02, stdev=166.31 00:17:08.540 clat percentiles (usec): 00:17:08.540 | 1.00th=[ 734], 5.00th=[ 832], 10.00th=[ 865], 20.00th=[ 889], 00:17:08.540 | 30.00th=[ 914], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:17:08.540 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:17:08.540 | 99.00th=[ 1139], 99.50th=[ 1205], 99.90th=[ 1450], 99.95th=[ 1450], 00:17:08.540 | 99.99th=[ 1450] 00:17:08.540 bw ( KiB/s): min= 160, max= 3936, per=25.43%, avg=2048.00, stdev=2670.04, samples=2 00:17:08.540 iops : min= 40, max= 984, avg=512.00, stdev=667.51, samples=2 00:17:08.540 lat (usec) : 750=0.84%, 1000=41.23% 00:17:08.540 lat (msec) : 2=57.93% 00:17:08.540 cpu : usr=0.90%, sys=3.10%, ctx=837, majf=0, minf=1 00:17:08.540 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.540 issued rwts: total=320,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.540 job3: (groupid=0, jobs=1): err= 0: pid=116059: Sat Apr 27 02:36:41 2024 00:17:08.540 read: IOPS=150, BW=603KiB/s (618kB/s)(604KiB/1001msec) 00:17:08.540 slat (nsec): min=24912, max=45034, avg=25780.37, stdev=2449.96 00:17:08.540 clat (usec): min=1234, max=42102, avg=3332.33, stdev=8533.17 00:17:08.540 lat (usec): min=1259, max=42128, avg=3358.11, stdev=8533.13 00:17:08.540 clat percentiles (usec): 00:17:08.540 | 1.00th=[ 1287], 5.00th=[ 1336], 10.00th=[ 1352], 20.00th=[ 
1385], 00:17:08.540 | 30.00th=[ 1418], 40.00th=[ 1434], 50.00th=[ 1467], 60.00th=[ 1500], 00:17:08.540 | 70.00th=[ 1516], 80.00th=[ 1532], 90.00th=[ 1565], 95.00th=[ 1663], 00:17:08.540 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:08.540 | 99.99th=[42206] 00:17:08.540 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:08.540 slat (nsec): min=3673, max=52361, avg=31514.08, stdev=7724.35 00:17:08.540 clat (usec): min=667, max=1175, avg=922.31, stdev=89.66 00:17:08.540 lat (usec): min=679, max=1209, avg=953.83, stdev=93.22 00:17:08.540 clat percentiles (usec): 00:17:08.540 | 1.00th=[ 676], 5.00th=[ 758], 10.00th=[ 799], 20.00th=[ 857], 00:17:08.540 | 30.00th=[ 889], 40.00th=[ 914], 50.00th=[ 930], 60.00th=[ 947], 00:17:08.540 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1029], 95.00th=[ 1057], 00:17:08.540 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1172], 99.95th=[ 1172], 00:17:08.540 | 99.99th=[ 1172] 00:17:08.540 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:08.540 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:08.540 lat (usec) : 750=3.62%, 1000=59.13% 00:17:08.540 lat (msec) : 2=36.20%, 50=1.06% 00:17:08.540 cpu : usr=0.50%, sys=2.50%, ctx=666, majf=0, minf=1 00:17:08.540 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.540 issued rwts: total=151,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.540 00:17:08.540 Run status group 0 (all jobs): 00:17:08.540 READ: bw=3300KiB/s (3379kB/s), 47.2KiB/s-1423KiB/s (48.3kB/s-1457kB/s), io=3356KiB (3437kB), run=1001-1017msec 00:17:08.540 WRITE: bw=8055KiB/s (8248kB/s), 2014KiB/s-2046KiB/s (2062kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1017msec 00:17:08.540 00:17:08.540 Disk stats (read/write): 00:17:08.540 nvme0n1: ios=289/512, merge=0/0, ticks=433/427, in_queue=860, util=87.07% 00:17:08.540 nvme0n2: ios=30/512, merge=0/0, ticks=1177/490, in_queue=1667, util=87.86% 00:17:08.541 nvme0n3: ios=261/512, merge=0/0, ticks=457/488, in_queue=945, util=95.25% 00:17:08.541 nvme0n4: ios=63/512, merge=0/0, ticks=1225/481, in_queue=1706, util=94.34% 00:17:08.541 02:36:41 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:08.541 [global] 00:17:08.541 thread=1 00:17:08.541 invalidate=1 00:17:08.541 rw=randwrite 00:17:08.541 time_based=1 00:17:08.541 runtime=1 00:17:08.541 ioengine=libaio 00:17:08.541 direct=1 00:17:08.541 bs=4096 00:17:08.541 iodepth=1 00:17:08.541 norandommap=0 00:17:08.541 numjobs=1 00:17:08.541 00:17:08.541 verify_dump=1 00:17:08.541 verify_backlog=512 00:17:08.541 verify_state_save=0 00:17:08.541 do_verify=1 00:17:08.541 verify=crc32c-intel 00:17:08.541 [job0] 00:17:08.541 filename=/dev/nvme0n1 00:17:08.541 [job1] 00:17:08.541 filename=/dev/nvme0n2 00:17:08.541 [job2] 00:17:08.541 filename=/dev/nvme0n3 00:17:08.541 [job3] 00:17:08.541 filename=/dev/nvme0n4 00:17:08.541 Could not set queue depth (nvme0n1) 00:17:08.541 Could not set queue depth (nvme0n2) 00:17:08.541 Could not set queue depth (nvme0n3) 00:17:08.541 Could not set queue depth (nvme0n4) 00:17:08.807 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:17:08.807 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.807 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.807 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.807 fio-3.35 00:17:08.807 Starting 4 threads 00:17:10.229 00:17:10.229 job0: (groupid=0, jobs=1): err= 0: pid=116585: Sat Apr 27 02:36:43 2024 00:17:10.229 read: IOPS=339, BW=1359KiB/s (1391kB/s)(1360KiB/1001msec) 00:17:10.229 slat (nsec): min=24628, max=66385, avg=25804.14, stdev=3376.50 00:17:10.229 clat (usec): min=1033, max=1657, avg=1441.74, stdev=77.62 00:17:10.229 lat (usec): min=1058, max=1681, avg=1467.55, stdev=77.23 00:17:10.229 clat percentiles (usec): 00:17:10.229 | 1.00th=[ 1270], 5.00th=[ 1319], 10.00th=[ 1352], 20.00th=[ 1369], 00:17:10.229 | 30.00th=[ 1401], 40.00th=[ 1434], 50.00th=[ 1450], 60.00th=[ 1467], 00:17:10.229 | 70.00th=[ 1483], 80.00th=[ 1500], 90.00th=[ 1532], 95.00th=[ 1565], 00:17:10.229 | 99.00th=[ 1614], 99.50th=[ 1614], 99.90th=[ 1663], 99.95th=[ 1663], 00:17:10.229 | 99.99th=[ 1663] 00:17:10.229 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:10.229 slat (nsec): min=9969, max=65121, avg=32174.82, stdev=3001.30 00:17:10.229 clat (usec): min=643, max=1117, avg=931.77, stdev=81.66 00:17:10.229 lat (usec): min=676, max=1148, avg=963.95, stdev=81.83 00:17:10.229 clat percentiles (usec): 00:17:10.229 | 1.00th=[ 734], 5.00th=[ 775], 10.00th=[ 832], 20.00th=[ 873], 00:17:10.229 | 30.00th=[ 898], 40.00th=[ 914], 50.00th=[ 938], 60.00th=[ 955], 00:17:10.229 | 70.00th=[ 979], 80.00th=[ 1004], 90.00th=[ 1037], 95.00th=[ 1057], 00:17:10.229 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1123], 99.95th=[ 1123], 00:17:10.229 | 99.99th=[ 1123] 00:17:10.229 bw ( KiB/s): min= 4096, max= 4096, per=50.10%, avg=4096.00, stdev= 0.00, samples=1 00:17:10.229 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:10.229 lat (usec) : 750=1.53%, 1000=46.36% 00:17:10.229 lat (msec) : 2=52.11% 00:17:10.229 cpu : usr=1.10%, sys=2.90%, ctx=857, majf=0, minf=1 00:17:10.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.229 issued rwts: total=340,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.229 job1: (groupid=0, jobs=1): err= 0: pid=116586: Sat Apr 27 02:36:43 2024 00:17:10.229 read: IOPS=345, BW=1383KiB/s (1416kB/s)(1384KiB/1001msec) 00:17:10.229 slat (nsec): min=24407, max=45547, avg=25363.07, stdev=2906.90 00:17:10.229 clat (usec): min=951, max=1736, avg=1439.29, stdev=128.88 00:17:10.229 lat (usec): min=976, max=1761, avg=1464.65, stdev=128.86 00:17:10.229 clat percentiles (usec): 00:17:10.229 | 1.00th=[ 1090], 5.00th=[ 1221], 10.00th=[ 1254], 20.00th=[ 1336], 00:17:10.229 | 30.00th=[ 1385], 40.00th=[ 1418], 50.00th=[ 1450], 60.00th=[ 1500], 00:17:10.229 | 70.00th=[ 1532], 80.00th=[ 1549], 90.00th=[ 1582], 95.00th=[ 1598], 00:17:10.229 | 99.00th=[ 1647], 99.50th=[ 1713], 99.90th=[ 1745], 99.95th=[ 1745], 00:17:10.229 | 99.99th=[ 1745] 00:17:10.229 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:10.229 slat (nsec): min=10810, max=50710, 
avg=32151.61, stdev=4043.91 00:17:10.229 clat (usec): min=578, max=1285, avg=917.90, stdev=130.43 00:17:10.229 lat (usec): min=610, max=1317, avg=950.05, stdev=130.85 00:17:10.229 clat percentiles (usec): 00:17:10.229 | 1.00th=[ 660], 5.00th=[ 734], 10.00th=[ 775], 20.00th=[ 816], 00:17:10.229 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[ 898], 60.00th=[ 922], 00:17:10.229 | 70.00th=[ 947], 80.00th=[ 1020], 90.00th=[ 1139], 95.00th=[ 1156], 00:17:10.229 | 99.00th=[ 1254], 99.50th=[ 1254], 99.90th=[ 1287], 99.95th=[ 1287], 00:17:10.229 | 99.99th=[ 1287] 00:17:10.229 bw ( KiB/s): min= 4096, max= 4096, per=50.10%, avg=4096.00, stdev= 0.00, samples=1 00:17:10.229 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:10.229 lat (usec) : 750=4.43%, 1000=42.66% 00:17:10.229 lat (msec) : 2=52.91% 00:17:10.229 cpu : usr=1.40%, sys=2.60%, ctx=860, majf=0, minf=1 00:17:10.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.229 issued rwts: total=346,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.229 job2: (groupid=0, jobs=1): err= 0: pid=116587: Sat Apr 27 02:36:43 2024 00:17:10.229 read: IOPS=322, BW=1291KiB/s (1322kB/s)(1292KiB/1001msec) 00:17:10.229 slat (nsec): min=8069, max=86822, avg=25547.75, stdev=4654.83 00:17:10.229 clat (usec): min=1128, max=1601, avg=1479.19, stdev=70.11 00:17:10.229 lat (usec): min=1154, max=1626, avg=1504.74, stdev=70.90 00:17:10.229 clat percentiles (usec): 00:17:10.229 | 1.00th=[ 1237], 5.00th=[ 1352], 10.00th=[ 1401], 20.00th=[ 1450], 00:17:10.229 | 30.00th=[ 1467], 40.00th=[ 1483], 50.00th=[ 1483], 60.00th=[ 1500], 00:17:10.229 | 70.00th=[ 1516], 80.00th=[ 1532], 90.00th=[ 1549], 95.00th=[ 1565], 00:17:10.229 | 99.00th=[ 1598], 99.50th=[ 1598], 99.90th=[ 1598], 99.95th=[ 1598], 00:17:10.229 | 99.99th=[ 1598] 00:17:10.229 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:10.229 slat (nsec): min=30915, max=68368, avg=32405.93, stdev=2646.42 00:17:10.229 clat (usec): min=706, max=1830, avg=958.50, stdev=87.51 00:17:10.229 lat (usec): min=737, max=1862, avg=990.91, stdev=87.48 00:17:10.229 clat percentiles (usec): 00:17:10.229 | 1.00th=[ 758], 5.00th=[ 840], 10.00th=[ 865], 20.00th=[ 898], 00:17:10.229 | 30.00th=[ 914], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 979], 00:17:10.229 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1037], 95.00th=[ 1057], 00:17:10.229 | 99.00th=[ 1205], 99.50th=[ 1319], 99.90th=[ 1827], 99.95th=[ 1827], 00:17:10.229 | 99.99th=[ 1827] 00:17:10.229 bw ( KiB/s): min= 4000, max= 4000, per=48.93%, avg=4000.00, stdev= 0.00, samples=1 00:17:10.229 iops : min= 1000, max= 1000, avg=1000.00, stdev= 0.00, samples=1 00:17:10.229 lat (usec) : 750=0.60%, 1000=43.95% 00:17:10.229 lat (msec) : 2=55.45% 00:17:10.229 cpu : usr=1.20%, sys=2.60%, ctx=837, majf=0, minf=1 00:17:10.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.229 issued rwts: total=323,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.229 job3: (groupid=0, jobs=1): err= 0: pid=116588: Sat Apr 27 02:36:43 
2024 00:17:10.229 read: IOPS=344, BW=1377KiB/s (1410kB/s)(1380KiB/1002msec) 00:17:10.229 slat (nsec): min=24509, max=62641, avg=25838.38, stdev=4000.19 00:17:10.229 clat (usec): min=1107, max=1753, avg=1427.52, stdev=95.08 00:17:10.229 lat (usec): min=1132, max=1778, avg=1453.36, stdev=95.23 00:17:10.229 clat percentiles (usec): 00:17:10.229 | 1.00th=[ 1172], 5.00th=[ 1270], 10.00th=[ 1303], 20.00th=[ 1352], 00:17:10.229 | 30.00th=[ 1385], 40.00th=[ 1418], 50.00th=[ 1434], 60.00th=[ 1450], 00:17:10.229 | 70.00th=[ 1483], 80.00th=[ 1500], 90.00th=[ 1532], 95.00th=[ 1565], 00:17:10.229 | 99.00th=[ 1663], 99.50th=[ 1696], 99.90th=[ 1762], 99.95th=[ 1762], 00:17:10.229 | 99.99th=[ 1762] 00:17:10.229 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:17:10.229 slat (nsec): min=10105, max=45735, avg=30610.21, stdev=3746.31 00:17:10.229 clat (usec): min=582, max=1318, avg=931.83, stdev=113.36 00:17:10.229 lat (usec): min=615, max=1349, avg=962.44, stdev=114.60 00:17:10.229 clat percentiles (usec): 00:17:10.229 | 1.00th=[ 652], 5.00th=[ 701], 10.00th=[ 775], 20.00th=[ 848], 00:17:10.229 | 30.00th=[ 898], 40.00th=[ 922], 50.00th=[ 947], 60.00th=[ 971], 00:17:10.229 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1090], 00:17:10.229 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1319], 99.95th=[ 1319], 00:17:10.229 | 99.99th=[ 1319] 00:17:10.229 bw ( KiB/s): min= 4096, max= 4096, per=50.10%, avg=4096.00, stdev= 0.00, samples=1 00:17:10.229 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:10.229 lat (usec) : 750=5.02%, 1000=38.39% 00:17:10.229 lat (msec) : 2=56.59% 00:17:10.229 cpu : usr=1.10%, sys=2.80%, ctx=857, majf=0, minf=1 00:17:10.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.229 issued rwts: total=345,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.229 00:17:10.229 Run status group 0 (all jobs): 00:17:10.229 READ: bw=5405KiB/s (5535kB/s), 1291KiB/s-1383KiB/s (1322kB/s-1416kB/s), io=5416KiB (5546kB), run=1001-1002msec 00:17:10.230 WRITE: bw=8176KiB/s (8372kB/s), 2044KiB/s-2046KiB/s (2093kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1002msec 00:17:10.230 00:17:10.230 Disk stats (read/write): 00:17:10.230 nvme0n1: ios=261/512, merge=0/0, ticks=1211/491, in_queue=1702, util=98.40% 00:17:10.230 nvme0n2: ios=261/512, merge=0/0, ticks=1302/453, in_queue=1755, util=97.25% 00:17:10.230 nvme0n3: ios=248/512, merge=0/0, ticks=961/491, in_queue=1452, util=98.63% 00:17:10.230 nvme0n4: ios=235/512, merge=0/0, ticks=325/484, in_queue=809, util=89.55% 00:17:10.230 02:36:43 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:10.230 [global] 00:17:10.230 thread=1 00:17:10.230 invalidate=1 00:17:10.230 rw=write 00:17:10.230 time_based=1 00:17:10.230 runtime=1 00:17:10.230 ioengine=libaio 00:17:10.230 direct=1 00:17:10.230 bs=4096 00:17:10.230 iodepth=128 00:17:10.230 norandommap=0 00:17:10.230 numjobs=1 00:17:10.230 00:17:10.230 verify_dump=1 00:17:10.230 verify_backlog=512 00:17:10.230 verify_state_save=0 00:17:10.230 do_verify=1 00:17:10.230 verify=crc32c-intel 00:17:10.230 [job0] 00:17:10.230 filename=/dev/nvme0n1 00:17:10.230 [job1] 00:17:10.230 filename=/dev/nvme0n2 00:17:10.230 
[job2] 00:17:10.230 filename=/dev/nvme0n3 00:17:10.230 [job3] 00:17:10.230 filename=/dev/nvme0n4 00:17:10.230 Could not set queue depth (nvme0n1) 00:17:10.230 Could not set queue depth (nvme0n2) 00:17:10.230 Could not set queue depth (nvme0n3) 00:17:10.230 Could not set queue depth (nvme0n4) 00:17:10.496 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:10.496 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:10.496 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:10.496 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:10.496 fio-3.35 00:17:10.496 Starting 4 threads 00:17:11.922 00:17:11.922 job0: (groupid=0, jobs=1): err= 0: pid=117102: Sat Apr 27 02:36:45 2024 00:17:11.922 read: IOPS=3674, BW=14.4MiB/s (15.0MB/s)(14.5MiB/1010msec) 00:17:11.923 slat (nsec): min=893, max=20724k, avg=116341.62, stdev=816105.36 00:17:11.923 clat (usec): min=2300, max=56772, avg=14118.65, stdev=8655.00 00:17:11.923 lat (usec): min=2312, max=56781, avg=14234.99, stdev=8708.29 00:17:11.923 clat percentiles (usec): 00:17:11.923 | 1.00th=[ 4817], 5.00th=[ 6063], 10.00th=[ 7046], 20.00th=[ 8160], 00:17:11.923 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[11600], 60.00th=[13304], 00:17:11.923 | 70.00th=[15664], 80.00th=[17957], 90.00th=[23725], 95.00th=[27132], 00:17:11.923 | 99.00th=[53740], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:17:11.923 | 99.99th=[56886] 00:17:11.923 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:17:11.923 slat (nsec): min=1501, max=16656k, avg=127700.60, stdev=725645.88 00:17:11.923 clat (usec): min=1132, max=85246, avg=18474.39, stdev=11826.72 00:17:11.923 lat (usec): min=1168, max=85255, avg=18602.09, stdev=11863.34 00:17:11.923 clat percentiles (usec): 00:17:11.923 | 1.00th=[ 2999], 5.00th=[ 6783], 10.00th=[ 7963], 20.00th=[10290], 00:17:11.923 | 30.00th=[12256], 40.00th=[13698], 50.00th=[15533], 60.00th=[17171], 00:17:11.923 | 70.00th=[19530], 80.00th=[23987], 90.00th=[31851], 95.00th=[40109], 00:17:11.923 | 99.00th=[63701], 99.50th=[67634], 99.90th=[85459], 99.95th=[85459], 00:17:11.923 | 99.99th=[85459] 00:17:11.923 bw ( KiB/s): min=15880, max=16880, per=23.57%, avg=16380.00, stdev=707.11, samples=2 00:17:11.923 iops : min= 3970, max= 4220, avg=4095.00, stdev=176.78, samples=2 00:17:11.923 lat (msec) : 2=0.32%, 4=0.54%, 10=24.93%, 20=52.27%, 50=19.34% 00:17:11.923 lat (msec) : 100=2.60% 00:17:11.923 cpu : usr=2.38%, sys=4.06%, ctx=453, majf=0, minf=1 00:17:11.923 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:11.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:11.923 issued rwts: total=3711,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.923 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:11.923 job1: (groupid=0, jobs=1): err= 0: pid=117103: Sat Apr 27 02:36:45 2024 00:17:11.923 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:17:11.923 slat (nsec): min=853, max=11608k, avg=99068.19, stdev=574099.25 00:17:11.923 clat (usec): min=5544, max=58697, avg=13408.33, stdev=5311.63 00:17:11.923 lat (usec): min=5552, max=61778, avg=13507.40, stdev=5310.80 00:17:11.923 clat percentiles (usec): 00:17:11.923 | 1.00th=[ 7242], 
5.00th=[ 9503], 10.00th=[10421], 20.00th=[10683], 00:17:11.923 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12256], 60.00th=[13304], 00:17:11.923 | 70.00th=[13960], 80.00th=[14615], 90.00th=[16712], 95.00th=[19268], 00:17:11.923 | 99.00th=[45876], 99.50th=[48497], 99.90th=[58459], 99.95th=[58459], 00:17:11.923 | 99.99th=[58459] 00:17:11.923 write: IOPS=4728, BW=18.5MiB/s (19.4MB/s)(18.7MiB/1011msec); 0 zone resets 00:17:11.923 slat (nsec): min=1509, max=11872k, avg=109873.36, stdev=615217.89 00:17:11.923 clat (usec): min=2180, max=50679, avg=13780.51, stdev=6291.54 00:17:11.923 lat (usec): min=2833, max=52444, avg=13890.39, stdev=6324.14 00:17:11.923 clat percentiles (usec): 00:17:11.923 | 1.00th=[ 4817], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 9503], 00:17:11.923 | 30.00th=[10683], 40.00th=[11600], 50.00th=[12387], 60.00th=[13304], 00:17:11.923 | 70.00th=[14484], 80.00th=[15795], 90.00th=[20841], 95.00th=[27132], 00:17:11.923 | 99.00th=[38536], 99.50th=[46400], 99.90th=[50594], 99.95th=[50594], 00:17:11.923 | 99.99th=[50594] 00:17:11.923 bw ( KiB/s): min=16744, max=20480, per=26.78%, avg=18612.00, stdev=2641.75, samples=2 00:17:11.923 iops : min= 4186, max= 5120, avg=4653.00, stdev=660.44, samples=2 00:17:11.923 lat (msec) : 4=0.21%, 10=15.64%, 20=76.44%, 50=7.30%, 100=0.42% 00:17:11.923 cpu : usr=2.08%, sys=3.76%, ctx=584, majf=0, minf=1 00:17:11.923 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:11.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:11.923 issued rwts: total=4608,4781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.923 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:11.923 job2: (groupid=0, jobs=1): err= 0: pid=117104: Sat Apr 27 02:36:45 2024 00:17:11.923 read: IOPS=4076, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:17:11.923 slat (nsec): min=903, max=52947k, avg=116230.55, stdev=1057147.65 00:17:11.923 clat (usec): min=1707, max=61819, avg=15457.15, stdev=8219.69 00:17:11.923 lat (usec): min=3145, max=64762, avg=15573.38, stdev=8258.92 00:17:11.923 clat percentiles (usec): 00:17:11.923 | 1.00th=[ 5211], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11469], 00:17:11.923 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13698], 60.00th=[14353], 00:17:11.923 | 70.00th=[15533], 80.00th=[16450], 90.00th=[20317], 95.00th=[27657], 00:17:11.923 | 99.00th=[54789], 99.50th=[55313], 99.90th=[61604], 99.95th=[61604], 00:17:11.923 | 99.99th=[61604] 00:17:11.923 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:17:11.923 slat (nsec): min=1659, max=11253k, avg=114977.53, stdev=605822.69 00:17:11.923 clat (usec): min=1300, max=61723, avg=15525.66, stdev=7418.98 00:17:11.923 lat (usec): min=1347, max=61727, avg=15640.63, stdev=7448.08 00:17:11.923 clat percentiles (usec): 00:17:11.923 | 1.00th=[ 4948], 5.00th=[ 6587], 10.00th=[ 8455], 20.00th=[ 9896], 00:17:11.923 | 30.00th=[10683], 40.00th=[11994], 50.00th=[13566], 60.00th=[15008], 00:17:11.923 | 70.00th=[17695], 80.00th=[21103], 90.00th=[26608], 95.00th=[31851], 00:17:11.923 | 99.00th=[36963], 99.50th=[40109], 99.90th=[42730], 99.95th=[61604], 00:17:11.923 | 99.99th=[61604] 00:17:11.923 bw ( KiB/s): min=16176, max=16592, per=23.58%, avg=16384.00, stdev=294.16, samples=2 00:17:11.923 iops : min= 4044, max= 4148, avg=4096.00, stdev=73.54, samples=2 00:17:11.923 lat (msec) : 2=0.02%, 4=0.50%, 10=13.99%, 20=68.63%, 50=15.31% 00:17:11.923 lat (msec) : 
100=1.55% 00:17:11.923 cpu : usr=2.10%, sys=4.89%, ctx=433, majf=0, minf=1 00:17:11.923 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:11.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:11.923 issued rwts: total=4089,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.923 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:11.923 job3: (groupid=0, jobs=1): err= 0: pid=117105: Sat Apr 27 02:36:45 2024 00:17:11.923 read: IOPS=4120, BW=16.1MiB/s (16.9MB/s)(16.3MiB/1012msec) 00:17:11.923 slat (nsec): min=912, max=14538k, avg=109713.71, stdev=749500.27 00:17:11.923 clat (usec): min=1243, max=42599, avg=14352.49, stdev=4866.26 00:17:11.923 lat (usec): min=4420, max=42608, avg=14462.21, stdev=4912.20 00:17:11.923 clat percentiles (usec): 00:17:11.923 | 1.00th=[ 7963], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[10421], 00:17:11.923 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13698], 60.00th=[15008], 00:17:11.923 | 70.00th=[15926], 80.00th=[16909], 90.00th=[18482], 95.00th=[22414], 00:17:11.923 | 99.00th=[33424], 99.50th=[35914], 99.90th=[42730], 99.95th=[42730], 00:17:11.923 | 99.99th=[42730] 00:17:11.923 write: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec); 0 zone resets 00:17:11.923 slat (nsec): min=1613, max=11759k, avg=103763.26, stdev=680407.52 00:17:11.923 clat (usec): min=2467, max=51265, avg=14507.13, stdev=6230.90 00:17:11.923 lat (usec): min=2477, max=52504, avg=14610.89, stdev=6247.01 00:17:11.923 clat percentiles (usec): 00:17:11.923 | 1.00th=[ 4621], 5.00th=[ 6128], 10.00th=[ 7635], 20.00th=[ 9503], 00:17:11.923 | 30.00th=[11207], 40.00th=[12518], 50.00th=[14091], 60.00th=[15008], 00:17:11.923 | 70.00th=[17171], 80.00th=[18220], 90.00th=[20579], 95.00th=[23725], 00:17:11.923 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:17:11.923 | 99.99th=[51119] 00:17:11.923 bw ( KiB/s): min=15952, max=20480, per=26.21%, avg=18216.00, stdev=3201.78, samples=2 00:17:11.923 iops : min= 3988, max= 5120, avg=4554.00, stdev=800.44, samples=2 00:17:11.923 lat (msec) : 2=0.01%, 4=0.23%, 10=18.98%, 20=69.34%, 50=11.43% 00:17:11.923 lat (msec) : 100=0.01% 00:17:11.923 cpu : usr=2.57%, sys=4.95%, ctx=384, majf=0, minf=1 00:17:11.923 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:11.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:11.923 issued rwts: total=4170,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.923 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:11.923 00:17:11.923 Run status group 0 (all jobs): 00:17:11.923 READ: bw=64.0MiB/s (67.1MB/s), 14.4MiB/s-17.8MiB/s (15.0MB/s-18.7MB/s), io=64.8MiB (67.9MB), run=1003-1012msec 00:17:11.923 WRITE: bw=67.9MiB/s (71.2MB/s), 15.8MiB/s-18.5MiB/s (16.6MB/s-19.4MB/s), io=68.7MiB (72.0MB), run=1003-1012msec 00:17:11.923 00:17:11.923 Disk stats (read/write): 00:17:11.923 nvme0n1: ios=2989/3072, merge=0/0, ticks=23016/38631, in_queue=61647, util=87.98% 00:17:11.923 nvme0n2: ios=3752/4096, merge=0/0, ticks=17400/19753, in_queue=37153, util=93.27% 00:17:11.923 nvme0n3: ios=3095/3407, merge=0/0, ticks=46554/49928, in_queue=96482, util=97.16% 00:17:11.923 nvme0n4: ios=3701/4096, merge=0/0, ticks=33788/27787, in_queue=61575, util=96.06% 00:17:11.923 02:36:45 -- target/fio.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:11.923 [global] 00:17:11.923 thread=1 00:17:11.923 invalidate=1 00:17:11.923 rw=randwrite 00:17:11.923 time_based=1 00:17:11.923 runtime=1 00:17:11.923 ioengine=libaio 00:17:11.923 direct=1 00:17:11.923 bs=4096 00:17:11.923 iodepth=128 00:17:11.923 norandommap=0 00:17:11.923 numjobs=1 00:17:11.923 00:17:11.923 verify_dump=1 00:17:11.923 verify_backlog=512 00:17:11.923 verify_state_save=0 00:17:11.923 do_verify=1 00:17:11.923 verify=crc32c-intel 00:17:11.923 [job0] 00:17:11.923 filename=/dev/nvme0n1 00:17:11.923 [job1] 00:17:11.923 filename=/dev/nvme0n2 00:17:11.923 [job2] 00:17:11.923 filename=/dev/nvme0n3 00:17:11.923 [job3] 00:17:11.923 filename=/dev/nvme0n4 00:17:11.923 Could not set queue depth (nvme0n1) 00:17:11.923 Could not set queue depth (nvme0n2) 00:17:11.923 Could not set queue depth (nvme0n3) 00:17:11.923 Could not set queue depth (nvme0n4) 00:17:12.193 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.193 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.193 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.193 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.193 fio-3.35 00:17:12.193 Starting 4 threads 00:17:13.609 00:17:13.609 job0: (groupid=0, jobs=1): err= 0: pid=117630: Sat Apr 27 02:36:46 2024 00:17:13.609 read: IOPS=5638, BW=22.0MiB/s (23.1MB/s)(22.3MiB/1012msec) 00:17:13.609 slat (nsec): min=873, max=10786k, avg=71676.66, stdev=539261.82 00:17:13.609 clat (usec): min=3363, max=25895, avg=10303.99, stdev=3593.95 00:17:13.609 lat (usec): min=3402, max=25934, avg=10375.67, stdev=3632.32 00:17:13.609 clat percentiles (usec): 00:17:13.609 | 1.00th=[ 5145], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 7504], 00:17:13.609 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[10159], 00:17:13.609 | 70.00th=[11076], 80.00th=[12780], 90.00th=[15401], 95.00th=[17433], 00:17:13.609 | 99.00th=[23462], 99.50th=[23987], 99.90th=[25822], 99.95th=[25822], 00:17:13.609 | 99.99th=[25822] 00:17:13.609 write: IOPS=6071, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1012msec); 0 zone resets 00:17:13.609 slat (nsec): min=1502, max=13535k, avg=71742.99, stdev=487011.06 00:17:13.609 clat (usec): min=678, max=36353, avg=11326.58, stdev=5868.25 00:17:13.609 lat (usec): min=688, max=36378, avg=11398.33, stdev=5903.05 00:17:13.609 clat percentiles (usec): 00:17:13.609 | 1.00th=[ 1729], 5.00th=[ 3851], 10.00th=[ 5145], 20.00th=[ 6259], 00:17:13.609 | 30.00th=[ 7177], 40.00th=[ 8455], 50.00th=[10290], 60.00th=[12256], 00:17:13.609 | 70.00th=[13698], 80.00th=[16057], 90.00th=[19792], 95.00th=[22152], 00:17:13.609 | 99.00th=[28181], 99.50th=[31065], 99.90th=[35390], 99.95th=[35390], 00:17:13.609 | 99.99th=[36439] 00:17:13.609 bw ( KiB/s): min=20632, max=28088, per=34.48%, avg=24360.00, stdev=5272.19, samples=2 00:17:13.609 iops : min= 5158, max= 7022, avg=6090.00, stdev=1318.05, samples=2 00:17:13.609 lat (usec) : 750=0.01%, 1000=0.01% 00:17:13.609 lat (msec) : 2=0.57%, 4=2.77%, 10=49.27%, 20=41.38%, 50=5.99% 00:17:13.609 cpu : usr=4.15%, sys=6.53%, ctx=530, majf=0, minf=1 00:17:13.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:13.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:13.609 issued rwts: total=5706,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:13.609 job1: (groupid=0, jobs=1): err= 0: pid=117631: Sat Apr 27 02:36:46 2024 00:17:13.609 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:17:13.609 slat (nsec): min=869, max=13369k, avg=141622.68, stdev=872811.00 00:17:13.609 clat (usec): min=3456, max=43728, avg=18509.28, stdev=6797.63 00:17:13.609 lat (usec): min=3466, max=43750, avg=18650.90, stdev=6870.84 00:17:13.609 clat percentiles (usec): 00:17:13.609 | 1.00th=[ 5473], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[13435], 00:17:13.609 | 30.00th=[14484], 40.00th=[15664], 50.00th=[17171], 60.00th=[19268], 00:17:13.609 | 70.00th=[21103], 80.00th=[24773], 90.00th=[28705], 95.00th=[29754], 00:17:13.609 | 99.00th=[34866], 99.50th=[40109], 99.90th=[40109], 99.95th=[42730], 00:17:13.609 | 99.99th=[43779] 00:17:13.609 write: IOPS=3768, BW=14.7MiB/s (15.4MB/s)(14.9MiB/1012msec); 0 zone resets 00:17:13.609 slat (nsec): min=1474, max=10932k, avg=121965.22, stdev=696727.51 00:17:13.609 clat (usec): min=1586, max=54344, avg=16157.69, stdev=6477.82 00:17:13.609 lat (usec): min=1608, max=54347, avg=16279.65, stdev=6521.23 00:17:13.609 clat percentiles (usec): 00:17:13.609 | 1.00th=[ 7439], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[11207], 00:17:13.609 | 30.00th=[11994], 40.00th=[13435], 50.00th=[14746], 60.00th=[16450], 00:17:13.609 | 70.00th=[18482], 80.00th=[20055], 90.00th=[23987], 95.00th=[26084], 00:17:13.609 | 99.00th=[44303], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:17:13.609 | 99.99th=[54264] 00:17:13.609 bw ( KiB/s): min=13104, max=16384, per=20.87%, avg=14744.00, stdev=2319.31, samples=2 00:17:13.609 iops : min= 3276, max= 4096, avg=3686.00, stdev=579.83, samples=2 00:17:13.609 lat (msec) : 2=0.04%, 4=0.51%, 10=8.03%, 20=63.83%, 50=27.58% 00:17:13.609 lat (msec) : 100=0.01% 00:17:13.609 cpu : usr=2.87%, sys=3.76%, ctx=345, majf=0, minf=1 00:17:13.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:17:13.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:13.609 issued rwts: total=3584,3814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:13.609 job2: (groupid=0, jobs=1): err= 0: pid=117632: Sat Apr 27 02:36:46 2024 00:17:13.609 read: IOPS=3982, BW=15.6MiB/s (16.3MB/s)(15.7MiB/1008msec) 00:17:13.609 slat (nsec): min=916, max=63773k, avg=130115.63, stdev=1276203.04 00:17:13.609 clat (usec): min=3568, max=90330, avg=16726.88, stdev=13613.93 00:17:13.609 lat (usec): min=3775, max=90335, avg=16857.00, stdev=13683.18 00:17:13.609 clat percentiles (usec): 00:17:13.609 | 1.00th=[ 7439], 5.00th=[ 8291], 10.00th=[10028], 20.00th=[10683], 00:17:13.609 | 30.00th=[11469], 40.00th=[12387], 50.00th=[13042], 60.00th=[13829], 00:17:13.609 | 70.00th=[15401], 80.00th=[18744], 90.00th=[23987], 95.00th=[30016], 00:17:13.609 | 99.00th=[88605], 99.50th=[90702], 99.90th=[90702], 99.95th=[90702], 00:17:13.609 | 99.99th=[90702] 00:17:13.609 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:17:13.609 slat (nsec): min=1540, max=9140.6k, avg=104894.79, stdev=578348.31 00:17:13.609 clat (usec): min=1238, max=39131, avg=14781.31, stdev=6130.88 
00:17:13.609 lat (usec): min=1249, max=39145, avg=14886.21, stdev=6171.53 00:17:13.609 clat percentiles (usec): 00:17:13.609 | 1.00th=[ 4555], 5.00th=[ 6652], 10.00th=[ 8455], 20.00th=[10159], 00:17:13.609 | 30.00th=[11076], 40.00th=[12256], 50.00th=[13173], 60.00th=[14877], 00:17:13.609 | 70.00th=[16581], 80.00th=[19268], 90.00th=[23200], 95.00th=[26346], 00:17:13.609 | 99.00th=[34341], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487], 00:17:13.609 | 99.99th=[39060] 00:17:13.609 bw ( KiB/s): min=12288, max=20480, per=23.19%, avg=16384.00, stdev=5792.62, samples=2 00:17:13.609 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:17:13.609 lat (msec) : 2=0.02%, 4=0.49%, 10=13.55%, 20=68.41%, 50=15.96% 00:17:13.609 lat (msec) : 100=1.57% 00:17:13.609 cpu : usr=2.88%, sys=4.27%, ctx=420, majf=0, minf=1 00:17:13.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:13.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:13.609 issued rwts: total=4014,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:13.609 job3: (groupid=0, jobs=1): err= 0: pid=117633: Sat Apr 27 02:36:46 2024 00:17:13.609 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:17:13.609 slat (nsec): min=904, max=12934k, avg=138056.79, stdev=815388.18 00:17:13.609 clat (usec): min=6479, max=46315, avg=17549.04, stdev=7597.84 00:17:13.609 lat (usec): min=6880, max=46324, avg=17687.09, stdev=7612.72 00:17:13.609 clat percentiles (usec): 00:17:13.609 | 1.00th=[ 7242], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 9634], 00:17:13.609 | 30.00th=[12518], 40.00th=[14615], 50.00th=[17171], 60.00th=[18482], 00:17:13.609 | 70.00th=[20317], 80.00th=[23725], 90.00th=[26084], 95.00th=[30016], 00:17:13.610 | 99.00th=[42206], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:17:13.610 | 99.99th=[46400] 00:17:13.610 write: IOPS=3812, BW=14.9MiB/s (15.6MB/s)(14.9MiB/1002msec); 0 zone resets 00:17:13.610 slat (nsec): min=1551, max=11924k, avg=127021.29, stdev=759050.37 00:17:13.610 clat (usec): min=1610, max=41045, avg=16675.73, stdev=8351.97 00:17:13.610 lat (usec): min=1612, max=41054, avg=16802.76, stdev=8379.24 00:17:13.610 clat percentiles (usec): 00:17:13.610 | 1.00th=[ 3425], 5.00th=[ 7701], 10.00th=[ 9241], 20.00th=[ 9765], 00:17:13.610 | 30.00th=[10945], 40.00th=[11994], 50.00th=[14222], 60.00th=[16581], 00:17:13.610 | 70.00th=[18482], 80.00th=[25035], 90.00th=[29754], 95.00th=[33424], 00:17:13.610 | 99.00th=[40109], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:13.610 | 99.99th=[41157] 00:17:13.610 bw ( KiB/s): min=13320, max=16224, per=20.91%, avg=14772.00, stdev=2053.44, samples=2 00:17:13.610 iops : min= 3330, max= 4056, avg=3693.00, stdev=513.36, samples=2 00:17:13.610 lat (msec) : 2=0.32%, 4=0.43%, 10=20.64%, 20=49.88%, 50=28.73% 00:17:13.610 cpu : usr=2.90%, sys=3.60%, ctx=321, majf=0, minf=1 00:17:13.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:17:13.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:13.610 issued rwts: total=3584,3820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:13.610 00:17:13.610 Run status group 0 (all jobs): 00:17:13.610 READ: bw=65.2MiB/s 
(68.4MB/s), 13.8MiB/s-22.0MiB/s (14.5MB/s-23.1MB/s), io=66.0MiB (69.2MB), run=1002-1012msec 00:17:13.610 WRITE: bw=69.0MiB/s (72.3MB/s), 14.7MiB/s-23.7MiB/s (15.4MB/s-24.9MB/s), io=69.8MiB (73.2MB), run=1002-1012msec 00:17:13.610 00:17:13.610 Disk stats (read/write): 00:17:13.610 nvme0n1: ios=4635/4985, merge=0/0, ticks=42528/50300, in_queue=92828, util=96.79% 00:17:13.610 nvme0n2: ios=3087/3072, merge=0/0, ticks=20124/17036, in_queue=37160, util=89.50% 00:17:13.610 nvme0n3: ios=3346/3584, merge=0/0, ticks=26587/27671, in_queue=54258, util=97.05% 00:17:13.610 nvme0n4: ios=2816/3072, merge=0/0, ticks=14335/12802, in_queue=27137, util=97.33% 00:17:13.610 02:36:46 -- target/fio.sh@55 -- # sync 00:17:13.610 02:36:46 -- target/fio.sh@59 -- # fio_pid=117883 00:17:13.610 02:36:46 -- target/fio.sh@61 -- # sleep 3 00:17:13.610 02:36:46 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:13.610 [global] 00:17:13.610 thread=1 00:17:13.610 invalidate=1 00:17:13.610 rw=read 00:17:13.610 time_based=1 00:17:13.610 runtime=10 00:17:13.610 ioengine=libaio 00:17:13.610 direct=1 00:17:13.610 bs=4096 00:17:13.610 iodepth=1 00:17:13.610 norandommap=1 00:17:13.610 numjobs=1 00:17:13.610 00:17:13.610 [job0] 00:17:13.610 filename=/dev/nvme0n1 00:17:13.610 [job1] 00:17:13.610 filename=/dev/nvme0n2 00:17:13.610 [job2] 00:17:13.610 filename=/dev/nvme0n3 00:17:13.610 [job3] 00:17:13.610 filename=/dev/nvme0n4 00:17:13.610 Could not set queue depth (nvme0n1) 00:17:13.610 Could not set queue depth (nvme0n2) 00:17:13.610 Could not set queue depth (nvme0n3) 00:17:13.610 Could not set queue depth (nvme0n4) 00:17:13.872 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.872 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.872 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.872 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.872 fio-3.35 00:17:13.872 Starting 4 threads 00:17:16.419 02:36:49 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:16.419 02:36:50 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:16.419 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=7507968, buflen=4096 00:17:16.419 fio: pid=118159, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:16.679 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=4419584, buflen=4096 00:17:16.679 fio: pid=118158, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:16.679 02:36:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:16.679 02:36:50 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:16.940 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=9125888, buflen=4096 00:17:16.940 fio: pid=118156, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:16.940 02:36:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:16.940 02:36:50 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:16.940 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=8224768, buflen=4096 00:17:16.940 fio: pid=118157, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:16.940 02:36:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:16.940 02:36:50 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:16.940 00:17:16.940 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=118156: Sat Apr 27 02:36:50 2024 00:17:16.940 read: IOPS=768, BW=3072KiB/s (3146kB/s)(8912KiB/2901msec) 00:17:16.940 slat (usec): min=6, max=16314, avg=52.62, stdev=601.21 00:17:16.940 clat (usec): min=704, max=2633, avg=1241.76, stdev=150.21 00:17:16.940 lat (usec): min=729, max=17651, avg=1294.39, stdev=620.19 00:17:16.940 clat percentiles (usec): 00:17:16.940 | 1.00th=[ 898], 5.00th=[ 979], 10.00th=[ 1029], 20.00th=[ 1106], 00:17:16.940 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1270], 60.00th=[ 1303], 00:17:16.940 | 70.00th=[ 1336], 80.00th=[ 1369], 90.00th=[ 1418], 95.00th=[ 1450], 00:17:16.940 | 99.00th=[ 1532], 99.50th=[ 1549], 99.90th=[ 1811], 99.95th=[ 1827], 00:17:16.940 | 99.99th=[ 2638] 00:17:16.941 bw ( KiB/s): min= 2896, max= 3440, per=33.41%, avg=3094.40, stdev=232.65, samples=5 00:17:16.941 iops : min= 724, max= 860, avg=773.60, stdev=58.16, samples=5 00:17:16.941 lat (usec) : 750=0.04%, 1000=6.86% 00:17:16.941 lat (msec) : 2=93.00%, 4=0.04% 00:17:16.941 cpu : usr=1.38%, sys=3.21%, ctx=2234, majf=0, minf=1 00:17:16.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.941 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.941 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.941 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=118157: Sat Apr 27 02:36:50 2024 00:17:16.941 read: IOPS=650, BW=2602KiB/s (2664kB/s)(8032KiB/3087msec) 00:17:16.941 slat (usec): min=6, max=19890, avg=64.23, stdev=783.73 00:17:16.941 clat (usec): min=832, max=42628, avg=1465.90, stdev=2817.68 00:17:16.941 lat (usec): min=858, max=53031, avg=1530.15, stdev=2996.21 00:17:16.941 clat percentiles (usec): 00:17:16.941 | 1.00th=[ 922], 5.00th=[ 996], 10.00th=[ 1057], 20.00th=[ 1139], 00:17:16.941 | 30.00th=[ 1188], 40.00th=[ 1254], 50.00th=[ 1303], 60.00th=[ 1336], 00:17:16.941 | 70.00th=[ 1352], 80.00th=[ 1385], 90.00th=[ 1418], 95.00th=[ 1450], 00:17:16.941 | 99.00th=[ 1680], 99.50th=[ 6783], 99.90th=[42730], 99.95th=[42730], 00:17:16.941 | 99.99th=[42730] 00:17:16.941 bw ( KiB/s): min= 2064, max= 3280, per=30.18%, avg=2795.20, stdev=446.41, samples=5 00:17:16.941 iops : min= 516, max= 820, avg=698.80, stdev=111.60, samples=5 00:17:16.941 lat (usec) : 1000=5.38% 00:17:16.941 lat (msec) : 2=93.93%, 4=0.05%, 10=0.10%, 50=0.50% 00:17:16.941 cpu : usr=1.13%, sys=2.59%, ctx=2016, majf=0, minf=1 00:17:16.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.941 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:17:16.941 issued rwts: total=2009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.941 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=118158: Sat Apr 27 02:36:50 2024 00:17:16.941 read: IOPS=395, BW=1580KiB/s (1618kB/s)(4316KiB/2732msec) 00:17:16.941 slat (usec): min=6, max=16431, avg=52.90, stdev=626.73 00:17:16.941 clat (usec): min=1043, max=42098, avg=2471.25, stdev=6209.56 00:17:16.941 lat (usec): min=1070, max=42123, avg=2524.17, stdev=6236.79 00:17:16.941 clat percentiles (usec): 00:17:16.941 | 1.00th=[ 1205], 5.00th=[ 1303], 10.00th=[ 1336], 20.00th=[ 1418], 00:17:16.941 | 30.00th=[ 1467], 40.00th=[ 1500], 50.00th=[ 1516], 60.00th=[ 1549], 00:17:16.941 | 70.00th=[ 1565], 80.00th=[ 1582], 90.00th=[ 1614], 95.00th=[ 1663], 00:17:16.941 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:16.941 | 99.99th=[42206] 00:17:16.941 bw ( KiB/s): min= 96, max= 2680, per=16.39%, avg=1518.40, stdev=1316.75, samples=5 00:17:16.941 iops : min= 24, max= 670, avg=379.60, stdev=329.19, samples=5 00:17:16.941 lat (msec) : 2=97.50%, 50=2.41% 00:17:16.941 cpu : usr=0.88%, sys=1.39%, ctx=1082, majf=0, minf=1 00:17:16.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.941 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.941 issued rwts: total=1080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.941 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=118159: Sat Apr 27 02:36:50 2024 00:17:16.941 read: IOPS=705, BW=2821KiB/s (2889kB/s)(7332KiB/2599msec) 00:17:16.941 slat (nsec): min=7150, max=64221, avg=26641.66, stdev=3325.17 00:17:16.941 clat (usec): min=803, max=42306, avg=1384.43, stdev=1344.66 00:17:16.941 lat (usec): min=829, max=42332, avg=1411.07, stdev=1344.65 00:17:16.941 clat percentiles (usec): 00:17:16.941 | 1.00th=[ 1020], 5.00th=[ 1106], 10.00th=[ 1156], 20.00th=[ 1205], 00:17:16.941 | 30.00th=[ 1237], 40.00th=[ 1270], 50.00th=[ 1319], 60.00th=[ 1401], 00:17:16.941 | 70.00th=[ 1450], 80.00th=[ 1500], 90.00th=[ 1549], 95.00th=[ 1582], 00:17:16.941 | 99.00th=[ 1631], 99.50th=[ 1663], 99.90th=[41157], 99.95th=[42206], 00:17:16.941 | 99.99th=[42206] 00:17:16.941 bw ( KiB/s): min= 2672, max= 3072, per=31.02%, avg=2873.60, stdev=150.88, samples=5 00:17:16.941 iops : min= 668, max= 768, avg=718.40, stdev=37.72, samples=5 00:17:16.941 lat (usec) : 1000=0.76% 00:17:16.941 lat (msec) : 2=99.07%, 50=0.11% 00:17:16.941 cpu : usr=1.46%, sys=2.62%, ctx=1835, majf=0, minf=2 00:17:16.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.941 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.941 issued rwts: total=1834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.941 00:17:16.941 Run status group 0 (all jobs): 00:17:16.941 READ: bw=9262KiB/s (9484kB/s), 1580KiB/s-3072KiB/s (1618kB/s-3146kB/s), io=27.9MiB (29.3MB), run=2599-3087msec 00:17:16.941 00:17:16.941 Disk stats (read/write): 00:17:16.941 nvme0n1: ios=2175/0, merge=0/0, ticks=2465/0, in_queue=2465, util=92.99% 00:17:16.941 
nvme0n2: ios=1960/0, merge=0/0, ticks=2495/0, in_queue=2495, util=93.32% 00:17:16.941 nvme0n3: ios=1006/0, merge=0/0, ticks=2441/0, in_queue=2441, util=96.03% 00:17:16.941 nvme0n4: ios=1832/0, merge=0/0, ticks=2284/0, in_queue=2284, util=96.43% 00:17:17.201 02:36:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:17.201 02:36:50 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:17.462 02:36:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:17.462 02:36:50 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:17.462 02:36:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:17.462 02:36:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:17.723 02:36:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:17.723 02:36:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:17.984 02:36:51 -- target/fio.sh@69 -- # fio_status=0 00:17:17.984 02:36:51 -- target/fio.sh@70 -- # wait 117883 00:17:17.984 02:36:51 -- target/fio.sh@70 -- # fio_status=4 00:17:17.984 02:36:51 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:17.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.984 02:36:51 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:17.984 02:36:51 -- common/autotest_common.sh@1205 -- # local i=0 00:17:17.984 02:36:51 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:17.984 02:36:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.984 02:36:51 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.984 02:36:51 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:17.984 02:36:51 -- common/autotest_common.sh@1217 -- # return 0 00:17:17.984 02:36:51 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:17.984 02:36:51 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:17.984 nvmf hotplug test: fio failed as expected 00:17:17.984 02:36:51 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.247 02:36:51 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:18.247 02:36:51 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:18.247 02:36:51 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:18.247 02:36:51 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:18.247 02:36:51 -- target/fio.sh@91 -- # nvmftestfini 00:17:18.247 02:36:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:18.247 02:36:51 -- nvmf/common.sh@117 -- # sync 00:17:18.247 02:36:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.247 02:36:51 -- nvmf/common.sh@120 -- # set +e 00:17:18.247 02:36:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:18.247 02:36:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:18.247 rmmod nvme_tcp 00:17:18.247 rmmod nvme_fabrics 00:17:18.247 rmmod nvme_keyring 00:17:18.247 02:36:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:18.247 02:36:51 -- 
nvmf/common.sh@124 -- # set -e 00:17:18.247 02:36:51 -- nvmf/common.sh@125 -- # return 0 00:17:18.247 02:36:51 -- nvmf/common.sh@478 -- # '[' -n 114409 ']' 00:17:18.247 02:36:51 -- nvmf/common.sh@479 -- # killprocess 114409 00:17:18.247 02:36:51 -- common/autotest_common.sh@936 -- # '[' -z 114409 ']' 00:17:18.247 02:36:51 -- common/autotest_common.sh@940 -- # kill -0 114409 00:17:18.247 02:36:51 -- common/autotest_common.sh@941 -- # uname 00:17:18.247 02:36:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.247 02:36:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114409 00:17:18.247 02:36:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:18.247 02:36:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:18.247 02:36:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114409' 00:17:18.247 killing process with pid 114409 00:17:18.247 02:36:51 -- common/autotest_common.sh@955 -- # kill 114409 00:17:18.247 02:36:51 -- common/autotest_common.sh@960 -- # wait 114409 00:17:18.516 02:36:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:18.516 02:36:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:18.516 02:36:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:18.516 02:36:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:18.516 02:36:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:18.516 02:36:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.516 02:36:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.516 02:36:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.509 02:36:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:20.509 00:17:20.509 real 0m28.519s 00:17:20.509 user 2m33.939s 00:17:20.509 sys 0m9.236s 00:17:20.509 02:36:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:20.509 02:36:53 -- common/autotest_common.sh@10 -- # set +x 00:17:20.509 ************************************ 00:17:20.509 END TEST nvmf_fio_target 00:17:20.509 ************************************ 00:17:20.509 02:36:54 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:20.509 02:36:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:20.509 02:36:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:20.509 02:36:54 -- common/autotest_common.sh@10 -- # set +x 00:17:20.770 ************************************ 00:17:20.770 START TEST nvmf_bdevio 00:17:20.770 ************************************ 00:17:20.770 02:36:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:20.770 * Looking for test storage... 
00:17:20.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:20.770 02:36:54 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.770 02:36:54 -- nvmf/common.sh@7 -- # uname -s 00:17:20.770 02:36:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.770 02:36:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.770 02:36:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.770 02:36:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.770 02:36:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.770 02:36:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.770 02:36:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.770 02:36:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.770 02:36:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.770 02:36:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.770 02:36:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.770 02:36:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.770 02:36:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.770 02:36:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.770 02:36:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.770 02:36:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.770 02:36:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.770 02:36:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.770 02:36:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.770 02:36:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.770 02:36:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.770 02:36:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.770 02:36:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.770 02:36:54 -- paths/export.sh@5 -- # export PATH 00:17:20.770 02:36:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.770 02:36:54 -- nvmf/common.sh@47 -- # : 0 00:17:20.770 02:36:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:20.770 02:36:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:20.770 02:36:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.770 02:36:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.770 02:36:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.770 02:36:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:20.770 02:36:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:20.770 02:36:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:20.770 02:36:54 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:20.770 02:36:54 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:20.770 02:36:54 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:20.770 02:36:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:20.770 02:36:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.770 02:36:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:20.770 02:36:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:20.770 02:36:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:20.770 02:36:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.770 02:36:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.770 02:36:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.770 02:36:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:20.770 02:36:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:20.770 02:36:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:20.770 02:36:54 -- common/autotest_common.sh@10 -- # set +x 00:17:27.361 02:37:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:27.361 02:37:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:27.361 02:37:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:27.361 02:37:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:27.361 02:37:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:27.361 02:37:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:27.361 02:37:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:27.361 02:37:00 -- nvmf/common.sh@295 -- # net_devs=() 00:17:27.361 02:37:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:27.361 02:37:00 -- nvmf/common.sh@296 
-- # e810=() 00:17:27.361 02:37:00 -- nvmf/common.sh@296 -- # local -ga e810 00:17:27.361 02:37:00 -- nvmf/common.sh@297 -- # x722=() 00:17:27.361 02:37:00 -- nvmf/common.sh@297 -- # local -ga x722 00:17:27.361 02:37:00 -- nvmf/common.sh@298 -- # mlx=() 00:17:27.361 02:37:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:27.361 02:37:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.361 02:37:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.361 02:37:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.361 02:37:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.361 02:37:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.361 02:37:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.361 02:37:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.361 02:37:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.361 02:37:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.361 02:37:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.361 02:37:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.361 02:37:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:27.361 02:37:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:27.361 02:37:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:27.361 02:37:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.361 02:37:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:27.361 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:27.361 02:37:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.361 02:37:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:27.361 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:27.361 02:37:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:27.361 02:37:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.361 02:37:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.361 02:37:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:27.361 02:37:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.361 02:37:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:27.361 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:17:27.361 02:37:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.361 02:37:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.361 02:37:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.361 02:37:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:27.361 02:37:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.361 02:37:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:27.361 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:27.361 02:37:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.361 02:37:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:27.361 02:37:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:27.361 02:37:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:27.361 02:37:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:27.361 02:37:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.361 02:37:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.361 02:37:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.361 02:37:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:27.361 02:37:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.361 02:37:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.361 02:37:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:27.361 02:37:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.361 02:37:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.361 02:37:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:27.361 02:37:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:27.361 02:37:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.361 02:37:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.622 02:37:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.622 02:37:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.622 02:37:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:27.622 02:37:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.622 02:37:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.622 02:37:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.622 02:37:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:27.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:17:27.622 00:17:27.622 --- 10.0.0.2 ping statistics --- 00:17:27.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.622 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:17:27.622 02:37:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.402 ms 00:17:27.622 00:17:27.622 --- 10.0.0.1 ping statistics --- 00:17:27.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.622 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:17:27.622 02:37:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.622 02:37:01 -- nvmf/common.sh@411 -- # return 0 00:17:27.622 02:37:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:27.622 02:37:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.623 02:37:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:27.623 02:37:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:27.623 02:37:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.623 02:37:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:27.623 02:37:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:27.623 02:37:01 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:27.623 02:37:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:27.623 02:37:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:27.623 02:37:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.623 02:37:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:27.623 02:37:01 -- nvmf/common.sh@470 -- # nvmfpid=123180 00:17:27.883 02:37:01 -- nvmf/common.sh@471 -- # waitforlisten 123180 00:17:27.883 02:37:01 -- common/autotest_common.sh@817 -- # '[' -z 123180 ']' 00:17:27.883 02:37:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.883 02:37:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:27.883 02:37:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.883 02:37:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:27.883 02:37:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.883 [2024-04-27 02:37:01.274502] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:17:27.883 [2024-04-27 02:37:01.274551] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.883 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.883 [2024-04-27 02:37:01.338770] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.883 [2024-04-27 02:37:01.401765] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.883 [2024-04-27 02:37:01.401803] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.883 [2024-04-27 02:37:01.401812] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.883 [2024-04-27 02:37:01.401820] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.883 [2024-04-27 02:37:01.401832] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:27.883 [2024-04-27 02:37:01.401987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:27.883 [2024-04-27 02:37:01.402136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:27.883 [2024-04-27 02:37:01.402309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.883 [2024-04-27 02:37:01.402309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:28.454 02:37:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:28.454 02:37:02 -- common/autotest_common.sh@850 -- # return 0 00:17:28.455 02:37:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:28.455 02:37:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:28.455 02:37:02 -- common/autotest_common.sh@10 -- # set +x 00:17:28.715 02:37:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.715 02:37:02 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:28.715 02:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.715 02:37:02 -- common/autotest_common.sh@10 -- # set +x 00:17:28.715 [2024-04-27 02:37:02.108894] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.715 02:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.715 02:37:02 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:28.715 02:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.715 02:37:02 -- common/autotest_common.sh@10 -- # set +x 00:17:28.715 Malloc0 00:17:28.715 02:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.715 02:37:02 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:28.715 02:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.715 02:37:02 -- common/autotest_common.sh@10 -- # set +x 00:17:28.715 02:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.715 02:37:02 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:28.715 02:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.715 02:37:02 -- common/autotest_common.sh@10 -- # set +x 00:17:28.715 02:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.715 02:37:02 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.715 02:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.715 02:37:02 -- common/autotest_common.sh@10 -- # set +x 00:17:28.715 [2024-04-27 02:37:02.152243] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.715 02:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.715 02:37:02 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:28.715 02:37:02 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:28.715 02:37:02 -- nvmf/common.sh@521 -- # config=() 00:17:28.715 02:37:02 -- nvmf/common.sh@521 -- # local subsystem config 00:17:28.715 02:37:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:28.715 02:37:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:28.715 { 00:17:28.715 "params": { 00:17:28.715 "name": "Nvme$subsystem", 00:17:28.715 "trtype": "$TEST_TRANSPORT", 00:17:28.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:28.715 "adrfam": "ipv4", 00:17:28.715 "trsvcid": 
"$NVMF_PORT", 00:17:28.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:28.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:28.716 "hdgst": ${hdgst:-false}, 00:17:28.716 "ddgst": ${ddgst:-false} 00:17:28.716 }, 00:17:28.716 "method": "bdev_nvme_attach_controller" 00:17:28.716 } 00:17:28.716 EOF 00:17:28.716 )") 00:17:28.716 02:37:02 -- nvmf/common.sh@543 -- # cat 00:17:28.716 02:37:02 -- nvmf/common.sh@545 -- # jq . 00:17:28.716 02:37:02 -- nvmf/common.sh@546 -- # IFS=, 00:17:28.716 02:37:02 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:28.716 "params": { 00:17:28.716 "name": "Nvme1", 00:17:28.716 "trtype": "tcp", 00:17:28.716 "traddr": "10.0.0.2", 00:17:28.716 "adrfam": "ipv4", 00:17:28.716 "trsvcid": "4420", 00:17:28.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.716 "hdgst": false, 00:17:28.716 "ddgst": false 00:17:28.716 }, 00:17:28.716 "method": "bdev_nvme_attach_controller" 00:17:28.716 }' 00:17:28.716 [2024-04-27 02:37:02.204245] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:17:28.716 [2024-04-27 02:37:02.204299] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123239 ] 00:17:28.716 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.716 [2024-04-27 02:37:02.262913] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:28.716 [2024-04-27 02:37:02.326919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.716 [2024-04-27 02:37:02.327037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.716 [2024-04-27 02:37:02.327041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.976 I/O targets: 00:17:28.976 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:28.976 00:17:28.976 00:17:28.976 CUnit - A unit testing framework for C - Version 2.1-3 00:17:28.976 http://cunit.sourceforge.net/ 00:17:28.976 00:17:28.976 00:17:28.976 Suite: bdevio tests on: Nvme1n1 00:17:28.976 Test: blockdev write read block ...passed 00:17:29.237 Test: blockdev write zeroes read block ...passed 00:17:29.237 Test: blockdev write zeroes read no split ...passed 00:17:29.237 Test: blockdev write zeroes read split ...passed 00:17:29.237 Test: blockdev write zeroes read split partial ...passed 00:17:29.237 Test: blockdev reset ...[2024-04-27 02:37:02.661589] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:29.237 [2024-04-27 02:37:02.661649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1207a20 (9): Bad file descriptor 00:17:29.237 [2024-04-27 02:37:02.804821] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:29.237 passed 00:17:29.237 Test: blockdev write read 8 blocks ...passed 00:17:29.237 Test: blockdev write read size > 128k ...passed 00:17:29.237 Test: blockdev write read invalid size ...passed 00:17:29.237 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.237 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.237 Test: blockdev write read max offset ...passed 00:17:29.505 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.505 Test: blockdev writev readv 8 blocks ...passed 00:17:29.505 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.505 Test: blockdev writev readv block ...passed 00:17:29.505 Test: blockdev writev readv size > 128k ...passed 00:17:29.505 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.505 Test: blockdev comparev and writev ...[2024-04-27 02:37:03.038487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.505 [2024-04-27 02:37:03.038513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.505 [2024-04-27 02:37:03.038524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.505 [2024-04-27 02:37:03.038530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:29.505 [2024-04-27 02:37:03.039195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.505 [2024-04-27 02:37:03.039203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:29.505 [2024-04-27 02:37:03.039213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.505 [2024-04-27 02:37:03.039218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:29.505 [2024-04-27 02:37:03.039850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.505 [2024-04-27 02:37:03.039858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:29.505 [2024-04-27 02:37:03.039867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.505 [2024-04-27 02:37:03.039872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:29.505 [2024-04-27 02:37:03.040513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.505 [2024-04-27 02:37:03.040521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:29.505 [2024-04-27 02:37:03.040530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.505 [2024-04-27 02:37:03.040535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:29.505 passed 00:17:29.778 Test: blockdev nvme passthru rw ...passed 00:17:29.778 Test: blockdev nvme passthru vendor specific ...[2024-04-27 02:37:03.126302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.778 [2024-04-27 02:37:03.126315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:29.778 [2024-04-27 02:37:03.126840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.778 [2024-04-27 02:37:03.126848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:29.778 [2024-04-27 02:37:03.127369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.778 [2024-04-27 02:37:03.127378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:29.778 [2024-04-27 02:37:03.127863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.778 [2024-04-27 02:37:03.127871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.778 passed 00:17:29.778 Test: blockdev nvme admin passthru ...passed 00:17:29.778 Test: blockdev copy ...passed 00:17:29.778 00:17:29.778 Run Summary: Type Total Ran Passed Failed Inactive 00:17:29.778 suites 1 1 n/a 0 0 00:17:29.778 tests 23 23 23 0 0 00:17:29.778 asserts 152 152 152 0 n/a 00:17:29.778 00:17:29.778 Elapsed time = 1.334 seconds 00:17:29.778 02:37:03 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.778 02:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.778 02:37:03 -- common/autotest_common.sh@10 -- # set +x 00:17:29.778 02:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.778 02:37:03 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:29.778 02:37:03 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:29.778 02:37:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:29.778 02:37:03 -- nvmf/common.sh@117 -- # sync 00:17:29.778 02:37:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:29.778 02:37:03 -- nvmf/common.sh@120 -- # set +e 00:17:29.778 02:37:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:29.778 02:37:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:29.778 rmmod nvme_tcp 00:17:29.778 rmmod nvme_fabrics 00:17:29.778 rmmod nvme_keyring 00:17:29.778 02:37:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.039 02:37:03 -- nvmf/common.sh@124 -- # set -e 00:17:30.039 02:37:03 -- nvmf/common.sh@125 -- # return 0 00:17:30.039 02:37:03 -- nvmf/common.sh@478 -- # '[' -n 123180 ']' 00:17:30.039 02:37:03 -- nvmf/common.sh@479 -- # killprocess 123180 00:17:30.039 02:37:03 -- common/autotest_common.sh@936 -- # '[' -z 123180 ']' 00:17:30.039 02:37:03 -- common/autotest_common.sh@940 -- # kill -0 123180 00:17:30.039 02:37:03 -- common/autotest_common.sh@941 -- # uname 00:17:30.039 02:37:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:30.039 02:37:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123180 00:17:30.039 02:37:03 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:30.039 02:37:03 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:30.039 02:37:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123180' 00:17:30.039 killing process with pid 123180 00:17:30.039 02:37:03 -- common/autotest_common.sh@955 -- # kill 123180 00:17:30.039 02:37:03 -- common/autotest_common.sh@960 -- # wait 123180 00:17:30.039 02:37:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:30.039 02:37:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:30.039 02:37:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:30.039 02:37:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.039 02:37:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.039 02:37:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.039 02:37:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.039 02:37:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.591 02:37:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:32.591 00:17:32.591 real 0m11.495s 00:17:32.591 user 0m12.954s 00:17:32.591 sys 0m5.596s 00:17:32.591 02:37:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:32.591 02:37:05 -- common/autotest_common.sh@10 -- # set +x 00:17:32.592 ************************************ 00:17:32.592 END TEST nvmf_bdevio 00:17:32.592 ************************************ 00:17:32.592 02:37:05 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:17:32.592 02:37:05 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:32.592 02:37:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:32.592 02:37:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:32.592 02:37:05 -- common/autotest_common.sh@10 -- # set +x 00:17:32.592 ************************************ 00:17:32.592 START TEST nvmf_bdevio_no_huge 00:17:32.592 ************************************ 00:17:32.592 02:37:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:32.592 * Looking for test storage... 
00:17:32.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:32.592 02:37:05 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.592 02:37:05 -- nvmf/common.sh@7 -- # uname -s 00:17:32.592 02:37:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.592 02:37:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.592 02:37:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.592 02:37:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.592 02:37:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.592 02:37:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.592 02:37:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.592 02:37:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.592 02:37:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.592 02:37:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.592 02:37:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.592 02:37:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.592 02:37:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.592 02:37:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.592 02:37:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.592 02:37:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.592 02:37:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.592 02:37:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.592 02:37:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.592 02:37:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.592 02:37:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.592 02:37:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.592 02:37:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.592 02:37:06 -- paths/export.sh@5 -- # export PATH 00:17:32.592 02:37:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.592 02:37:06 -- nvmf/common.sh@47 -- # : 0 00:17:32.592 02:37:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.592 02:37:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.592 02:37:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.592 02:37:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.592 02:37:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.592 02:37:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.592 02:37:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.592 02:37:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.592 02:37:06 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:32.592 02:37:06 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:32.592 02:37:06 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:32.592 02:37:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:32.592 02:37:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.592 02:37:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:32.592 02:37:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:32.592 02:37:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:32.592 02:37:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.592 02:37:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.592 02:37:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.592 02:37:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:32.592 02:37:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:32.592 02:37:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:32.592 02:37:06 -- common/autotest_common.sh@10 -- # set +x 00:17:39.189 02:37:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:39.189 02:37:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:39.189 02:37:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:39.189 02:37:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:39.189 02:37:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:39.189 02:37:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:39.189 02:37:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:39.189 02:37:12 -- nvmf/common.sh@295 -- # net_devs=() 00:17:39.189 02:37:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:39.189 02:37:12 -- nvmf/common.sh@296 
-- # e810=() 00:17:39.189 02:37:12 -- nvmf/common.sh@296 -- # local -ga e810 00:17:39.189 02:37:12 -- nvmf/common.sh@297 -- # x722=() 00:17:39.189 02:37:12 -- nvmf/common.sh@297 -- # local -ga x722 00:17:39.189 02:37:12 -- nvmf/common.sh@298 -- # mlx=() 00:17:39.189 02:37:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:39.189 02:37:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.189 02:37:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.189 02:37:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.189 02:37:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.189 02:37:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.189 02:37:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.189 02:37:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.189 02:37:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.189 02:37:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.189 02:37:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.189 02:37:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.189 02:37:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:39.189 02:37:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:39.189 02:37:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:39.189 02:37:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.189 02:37:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:39.189 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:39.189 02:37:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.189 02:37:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:39.189 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:39.189 02:37:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:39.189 02:37:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.189 02:37:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.189 02:37:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:39.189 02:37:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.189 02:37:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:39.189 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:17:39.189 02:37:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.189 02:37:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.189 02:37:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.189 02:37:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:39.189 02:37:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.189 02:37:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:39.189 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:39.189 02:37:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.189 02:37:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:39.189 02:37:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:39.189 02:37:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:39.189 02:37:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:39.189 02:37:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.189 02:37:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.189 02:37:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.189 02:37:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:39.189 02:37:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.189 02:37:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.189 02:37:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:39.189 02:37:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.189 02:37:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.189 02:37:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:39.189 02:37:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:39.189 02:37:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.189 02:37:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.459 02:37:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.459 02:37:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.459 02:37:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:39.460 02:37:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.460 02:37:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.460 02:37:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.460 02:37:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:39.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:17:39.460 00:17:39.460 --- 10.0.0.2 ping statistics --- 00:17:39.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.460 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:17:39.460 02:37:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:39.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:17:39.460 00:17:39.460 --- 10.0.0.1 ping statistics --- 00:17:39.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.460 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:17:39.460 02:37:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.460 02:37:13 -- nvmf/common.sh@411 -- # return 0 00:17:39.460 02:37:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:39.460 02:37:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.460 02:37:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:39.460 02:37:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:39.460 02:37:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.460 02:37:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:39.460 02:37:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:39.460 02:37:13 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:39.460 02:37:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:39.460 02:37:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:39.460 02:37:13 -- common/autotest_common.sh@10 -- # set +x 00:17:39.460 02:37:13 -- nvmf/common.sh@470 -- # nvmfpid=127797 00:17:39.460 02:37:13 -- nvmf/common.sh@471 -- # waitforlisten 127797 00:17:39.460 02:37:13 -- common/autotest_common.sh@817 -- # '[' -z 127797 ']' 00:17:39.460 02:37:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.460 02:37:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:39.460 02:37:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:39.460 02:37:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.460 02:37:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:39.460 02:37:13 -- common/autotest_common.sh@10 -- # set +x 00:17:39.730 [2024-04-27 02:37:13.128447] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:17:39.730 [2024-04-27 02:37:13.128515] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:39.730 [2024-04-27 02:37:13.206113] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.730 [2024-04-27 02:37:13.301722] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.730 [2024-04-27 02:37:13.301756] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.730 [2024-04-27 02:37:13.301765] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.730 [2024-04-27 02:37:13.301773] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.730 [2024-04-27 02:37:13.301780] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:39.730 [2024-04-27 02:37:13.301921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:39.730 [2024-04-27 02:37:13.302070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:39.730 [2024-04-27 02:37:13.302222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.730 [2024-04-27 02:37:13.302223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:40.300 02:37:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:40.300 02:37:13 -- common/autotest_common.sh@850 -- # return 0 00:17:40.300 02:37:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:40.300 02:37:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:40.300 02:37:13 -- common/autotest_common.sh@10 -- # set +x 00:17:40.560 02:37:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.560 02:37:13 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.560 02:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.560 02:37:13 -- common/autotest_common.sh@10 -- # set +x 00:17:40.560 [2024-04-27 02:37:13.939075] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.560 02:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.560 02:37:13 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:40.560 02:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.560 02:37:13 -- common/autotest_common.sh@10 -- # set +x 00:17:40.560 Malloc0 00:17:40.560 02:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.560 02:37:13 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:40.560 02:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.560 02:37:13 -- common/autotest_common.sh@10 -- # set +x 00:17:40.560 02:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.560 02:37:13 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:40.561 02:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.561 02:37:13 -- common/autotest_common.sh@10 -- # set +x 00:17:40.561 02:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.561 02:37:13 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.561 02:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.561 02:37:13 -- common/autotest_common.sh@10 -- # set +x 00:17:40.561 [2024-04-27 02:37:13.975357] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.561 02:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.561 02:37:13 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:40.561 02:37:13 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:40.561 02:37:13 -- nvmf/common.sh@521 -- # config=() 00:17:40.561 02:37:13 -- nvmf/common.sh@521 -- # local subsystem config 00:17:40.561 02:37:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:40.561 02:37:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:40.561 { 00:17:40.561 "params": { 00:17:40.561 "name": "Nvme$subsystem", 00:17:40.561 "trtype": "$TEST_TRANSPORT", 00:17:40.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.561 "adrfam": "ipv4", 00:17:40.561 
"trsvcid": "$NVMF_PORT", 00:17:40.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.561 "hdgst": ${hdgst:-false}, 00:17:40.561 "ddgst": ${ddgst:-false} 00:17:40.561 }, 00:17:40.561 "method": "bdev_nvme_attach_controller" 00:17:40.561 } 00:17:40.561 EOF 00:17:40.561 )") 00:17:40.561 02:37:13 -- nvmf/common.sh@543 -- # cat 00:17:40.561 02:37:13 -- nvmf/common.sh@545 -- # jq . 00:17:40.561 02:37:13 -- nvmf/common.sh@546 -- # IFS=, 00:17:40.561 02:37:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:40.561 "params": { 00:17:40.561 "name": "Nvme1", 00:17:40.561 "trtype": "tcp", 00:17:40.561 "traddr": "10.0.0.2", 00:17:40.561 "adrfam": "ipv4", 00:17:40.561 "trsvcid": "4420", 00:17:40.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.561 "hdgst": false, 00:17:40.561 "ddgst": false 00:17:40.561 }, 00:17:40.561 "method": "bdev_nvme_attach_controller" 00:17:40.561 }' 00:17:40.561 [2024-04-27 02:37:14.024591] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:17:40.561 [2024-04-27 02:37:14.024641] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid127903 ] 00:17:40.561 [2024-04-27 02:37:14.086433] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.561 [2024-04-27 02:37:14.175800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.561 [2024-04-27 02:37:14.175919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.561 [2024-04-27 02:37:14.175923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.820 I/O targets: 00:17:40.820 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:40.820 00:17:40.820 00:17:40.820 CUnit - A unit testing framework for C - Version 2.1-3 00:17:40.820 http://cunit.sourceforge.net/ 00:17:40.820 00:17:40.820 00:17:40.820 Suite: bdevio tests on: Nvme1n1 00:17:40.820 Test: blockdev write read block ...passed 00:17:41.081 Test: blockdev write zeroes read block ...passed 00:17:41.081 Test: blockdev write zeroes read no split ...passed 00:17:41.081 Test: blockdev write zeroes read split ...passed 00:17:41.081 Test: blockdev write zeroes read split partial ...passed 00:17:41.081 Test: blockdev reset ...[2024-04-27 02:37:14.561255] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:41.081 [2024-04-27 02:37:14.561324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155f900 (9): Bad file descriptor 00:17:41.081 [2024-04-27 02:37:14.591976] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:41.081 passed 00:17:41.081 Test: blockdev write read 8 blocks ...passed 00:17:41.081 Test: blockdev write read size > 128k ...passed 00:17:41.081 Test: blockdev write read invalid size ...passed 00:17:41.081 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:41.081 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:41.081 Test: blockdev write read max offset ...passed 00:17:41.342 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:41.342 Test: blockdev writev readv 8 blocks ...passed 00:17:41.342 Test: blockdev writev readv 30 x 1block ...passed 00:17:41.342 Test: blockdev writev readv block ...passed 00:17:41.342 Test: blockdev writev readv size > 128k ...passed 00:17:41.342 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:41.342 Test: blockdev comparev and writev ...[2024-04-27 02:37:14.824327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.342 [2024-04-27 02:37:14.824352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.342 [2024-04-27 02:37:14.824363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.342 [2024-04-27 02:37:14.824369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:41.342 [2024-04-27 02:37:14.824873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.342 [2024-04-27 02:37:14.824881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:41.342 [2024-04-27 02:37:14.824891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.342 [2024-04-27 02:37:14.824896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:41.342 [2024-04-27 02:37:14.825411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.342 [2024-04-27 02:37:14.825419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:41.342 [2024-04-27 02:37:14.825429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.342 [2024-04-27 02:37:14.825434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:41.342 [2024-04-27 02:37:14.825943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.342 [2024-04-27 02:37:14.825952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:41.342 [2024-04-27 02:37:14.825961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.342 [2024-04-27 02:37:14.825968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:41.342 passed 00:17:41.342 Test: blockdev nvme passthru rw ...passed 00:17:41.342 Test: blockdev nvme passthru vendor specific ...[2024-04-27 02:37:14.912288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.342 [2024-04-27 02:37:14.912301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:41.342 [2024-04-27 02:37:14.912789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.342 [2024-04-27 02:37:14.912797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:41.342 [2024-04-27 02:37:14.913298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.342 [2024-04-27 02:37:14.913306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:41.342 [2024-04-27 02:37:14.913796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.342 [2024-04-27 02:37:14.913804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:41.342 passed 00:17:41.342 Test: blockdev nvme admin passthru ...passed 00:17:41.603 Test: blockdev copy ...passed 00:17:41.603 00:17:41.603 Run Summary: Type Total Ran Passed Failed Inactive 00:17:41.603 suites 1 1 n/a 0 0 00:17:41.603 tests 23 23 23 0 0 00:17:41.603 asserts 152 152 152 0 n/a 00:17:41.603 00:17:41.603 Elapsed time = 1.217 seconds 00:17:41.864 02:37:15 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.864 02:37:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.864 02:37:15 -- common/autotest_common.sh@10 -- # set +x 00:17:41.864 02:37:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.864 02:37:15 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:41.864 02:37:15 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:41.864 02:37:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:41.864 02:37:15 -- nvmf/common.sh@117 -- # sync 00:17:41.864 02:37:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.864 02:37:15 -- nvmf/common.sh@120 -- # set +e 00:17:41.864 02:37:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.864 02:37:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.864 rmmod nvme_tcp 00:17:41.864 rmmod nvme_fabrics 00:17:41.864 rmmod nvme_keyring 00:17:41.864 02:37:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.864 02:37:15 -- nvmf/common.sh@124 -- # set -e 00:17:41.864 02:37:15 -- nvmf/common.sh@125 -- # return 0 00:17:41.864 02:37:15 -- nvmf/common.sh@478 -- # '[' -n 127797 ']' 00:17:41.864 02:37:15 -- nvmf/common.sh@479 -- # killprocess 127797 00:17:41.864 02:37:15 -- common/autotest_common.sh@936 -- # '[' -z 127797 ']' 00:17:41.864 02:37:15 -- common/autotest_common.sh@940 -- # kill -0 127797 00:17:41.864 02:37:15 -- common/autotest_common.sh@941 -- # uname 00:17:41.864 02:37:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.864 02:37:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127797 00:17:41.864 02:37:15 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:41.864 02:37:15 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:41.864 02:37:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127797' 00:17:41.864 killing process with pid 127797 00:17:41.864 02:37:15 -- common/autotest_common.sh@955 -- # kill 127797 00:17:41.864 02:37:15 -- common/autotest_common.sh@960 -- # wait 127797 00:17:42.125 02:37:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:42.125 02:37:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:42.125 02:37:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:42.125 02:37:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.125 02:37:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.125 02:37:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.125 02:37:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.125 02:37:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.671 02:37:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:44.671 00:17:44.671 real 0m11.848s 00:17:44.671 user 0m13.244s 00:17:44.671 sys 0m6.192s 00:17:44.671 02:37:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:44.672 02:37:17 -- common/autotest_common.sh@10 -- # set +x 00:17:44.672 ************************************ 00:17:44.672 END TEST nvmf_bdevio_no_huge 00:17:44.672 ************************************ 00:17:44.672 02:37:17 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:44.672 02:37:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:44.672 02:37:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:44.672 02:37:17 -- common/autotest_common.sh@10 -- # set +x 00:17:44.672 ************************************ 00:17:44.672 START TEST nvmf_tls 00:17:44.672 ************************************ 00:17:44.672 02:37:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:44.672 * Looking for test storage... 
00:17:44.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.672 02:37:18 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.672 02:37:18 -- nvmf/common.sh@7 -- # uname -s 00:17:44.672 02:37:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.672 02:37:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.672 02:37:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.672 02:37:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.672 02:37:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.672 02:37:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.672 02:37:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.672 02:37:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.672 02:37:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.672 02:37:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.672 02:37:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.672 02:37:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.672 02:37:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.672 02:37:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.672 02:37:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.672 02:37:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.672 02:37:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.672 02:37:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.672 02:37:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.672 02:37:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.672 02:37:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.672 02:37:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.672 02:37:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.672 02:37:18 -- paths/export.sh@5 -- # export PATH 00:17:44.672 02:37:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.672 02:37:18 -- nvmf/common.sh@47 -- # : 0 00:17:44.672 02:37:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.672 02:37:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.672 02:37:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.672 02:37:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.672 02:37:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.672 02:37:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.672 02:37:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.672 02:37:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.672 02:37:18 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.672 02:37:18 -- target/tls.sh@62 -- # nvmftestinit 00:17:44.672 02:37:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:44.672 02:37:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.672 02:37:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:44.672 02:37:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:44.672 02:37:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:44.672 02:37:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.672 02:37:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.672 02:37:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.672 02:37:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:44.672 02:37:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:44.672 02:37:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:44.672 02:37:18 -- common/autotest_common.sh@10 -- # set +x 00:17:51.261 02:37:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:51.261 02:37:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:51.261 02:37:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:51.261 02:37:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:51.261 02:37:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:51.261 02:37:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:51.261 02:37:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:51.261 02:37:24 -- nvmf/common.sh@295 -- # net_devs=() 00:17:51.261 02:37:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:51.261 02:37:24 -- nvmf/common.sh@296 -- # e810=() 00:17:51.261 
02:37:24 -- nvmf/common.sh@296 -- # local -ga e810 00:17:51.261 02:37:24 -- nvmf/common.sh@297 -- # x722=() 00:17:51.261 02:37:24 -- nvmf/common.sh@297 -- # local -ga x722 00:17:51.261 02:37:24 -- nvmf/common.sh@298 -- # mlx=() 00:17:51.261 02:37:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:51.261 02:37:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.261 02:37:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.261 02:37:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.261 02:37:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.261 02:37:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.261 02:37:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.261 02:37:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.261 02:37:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.261 02:37:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.261 02:37:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.262 02:37:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.262 02:37:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:51.262 02:37:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:51.262 02:37:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:51.262 02:37:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.262 02:37:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:51.262 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:51.262 02:37:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.262 02:37:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:51.262 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:51.262 02:37:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:51.262 02:37:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.262 02:37:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.262 02:37:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:51.262 02:37:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.262 02:37:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:51.262 Found net devices under 
0000:4b:00.0: cvl_0_0 00:17:51.262 02:37:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.262 02:37:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.262 02:37:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.262 02:37:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:51.262 02:37:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.262 02:37:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:51.262 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:51.262 02:37:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.262 02:37:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:51.262 02:37:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:51.262 02:37:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:51.262 02:37:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.262 02:37:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.262 02:37:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.262 02:37:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:51.262 02:37:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.262 02:37:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.262 02:37:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:51.262 02:37:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.262 02:37:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.262 02:37:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:51.262 02:37:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:51.262 02:37:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.262 02:37:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.262 02:37:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.262 02:37:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.262 02:37:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:51.262 02:37:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.262 02:37:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.262 02:37:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.262 02:37:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:51.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:17:51.262 00:17:51.262 --- 10.0.0.2 ping statistics --- 00:17:51.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.262 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:17:51.262 02:37:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:51.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.378 ms 00:17:51.262 00:17:51.262 --- 10.0.0.1 ping statistics --- 00:17:51.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.262 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:17:51.262 02:37:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.262 02:37:24 -- nvmf/common.sh@411 -- # return 0 00:17:51.262 02:37:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:51.262 02:37:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.262 02:37:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:51.262 02:37:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.262 02:37:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:51.262 02:37:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:51.524 02:37:24 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:51.524 02:37:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:51.524 02:37:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:51.524 02:37:24 -- common/autotest_common.sh@10 -- # set +x 00:17:51.524 02:37:24 -- nvmf/common.sh@470 -- # nvmfpid=132301 00:17:51.524 02:37:24 -- nvmf/common.sh@471 -- # waitforlisten 132301 00:17:51.524 02:37:24 -- common/autotest_common.sh@817 -- # '[' -z 132301 ']' 00:17:51.524 02:37:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.524 02:37:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:51.524 02:37:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.524 02:37:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:51.524 02:37:24 -- common/autotest_common.sh@10 -- # set +x 00:17:51.524 02:37:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:51.524 [2024-04-27 02:37:24.949503] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:17:51.524 [2024-04-27 02:37:24.949552] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.524 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.524 [2024-04-27 02:37:25.015834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.524 [2024-04-27 02:37:25.078055] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.524 [2024-04-27 02:37:25.078091] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.524 [2024-04-27 02:37:25.078098] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.524 [2024-04-27 02:37:25.078105] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.524 [2024-04-27 02:37:25.078111] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
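The ping checks above close out nvmf_tcp_init: the two ice ports found earlier are split across network namespaces, with cvl_0_0 moved into cvl_0_0_ns_spdk as the target-side interface (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator side (10.0.0.1). Collected from the trace above with the xtrace prefixes stripped, the topology setup amounts to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

nvmf_tgt itself is started inside the namespace (the "ip netns exec cvl_0_0_ns_spdk" prefix on the nvmf_tgt command above), so the TLS listener created later binds to 10.0.0.2 behind cvl_0_0.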
00:17:51.524 [2024-04-27 02:37:25.078128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.098 02:37:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:52.098 02:37:25 -- common/autotest_common.sh@850 -- # return 0 00:17:52.098 02:37:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:52.098 02:37:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:52.098 02:37:25 -- common/autotest_common.sh@10 -- # set +x 00:17:52.359 02:37:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.359 02:37:25 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:52.359 02:37:25 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:52.359 true 00:17:52.359 02:37:25 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.359 02:37:25 -- target/tls.sh@73 -- # jq -r .tls_version 00:17:52.620 02:37:26 -- target/tls.sh@73 -- # version=0 00:17:52.620 02:37:26 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:52.620 02:37:26 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:52.620 02:37:26 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.620 02:37:26 -- target/tls.sh@81 -- # jq -r .tls_version 00:17:52.953 02:37:26 -- target/tls.sh@81 -- # version=13 00:17:52.953 02:37:26 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:52.953 02:37:26 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:52.953 02:37:26 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.953 02:37:26 -- target/tls.sh@89 -- # jq -r .tls_version 00:17:53.213 02:37:26 -- target/tls.sh@89 -- # version=7 00:17:53.213 02:37:26 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:53.213 02:37:26 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:53.213 02:37:26 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:53.213 02:37:26 -- target/tls.sh@96 -- # ktls=false 00:17:53.213 02:37:26 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:53.213 02:37:26 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:53.475 02:37:26 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:53.475 02:37:26 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:53.736 02:37:27 -- target/tls.sh@104 -- # ktls=true 00:17:53.736 02:37:27 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:53.736 02:37:27 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:53.736 02:37:27 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:53.736 02:37:27 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:53.996 02:37:27 -- target/tls.sh@112 -- # ktls=false 00:17:53.996 02:37:27 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:53.996 02:37:27 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
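format_interchange_psk above turns a configured key into the NVMe TLS PSK interchange string (NVMeTLSkey-1:<hash>:<base64 blob>:) that the rest of the test passes around as /tmp key files. A minimal sketch of the transformation, assuming the argument bytes are used verbatim and a CRC-32 of the key is appended little-endian before base64 encoding (variable names and the python body are illustrative, not the exact helper from nvmf/common.sh):

key=00112233445566778899aabbccddeeff   # configured PSK, used verbatim as bytes
digest=01                              # hash indicator: 01 = SHA-256, 02 = SHA-384
python3 - "$key" "$digest" <<'PSK_EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key))  # assumption: CRC-32 of the key, appended little-endian
print(f"NVMeTLSkey-1:{sys.argv[2]}:{base64.b64encode(key + crc).decode()}:")
PSK_EOF

Both keys generated this way are written to /tmp files, chmod 0600, and then handed to the target via nvmf_subsystem_add_host --psk and to the initiators via --psk / --psk-path below.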
00:17:53.996 02:37:27 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:53.996 02:37:27 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:53.996 02:37:27 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:53.996 02:37:27 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:17:53.996 02:37:27 -- nvmf/common.sh@693 -- # digest=1 00:17:53.996 02:37:27 -- nvmf/common.sh@694 -- # python - 00:17:53.996 02:37:27 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:53.996 02:37:27 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:53.996 02:37:27 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:53.996 02:37:27 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:53.996 02:37:27 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:53.996 02:37:27 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:17:53.996 02:37:27 -- nvmf/common.sh@693 -- # digest=1 00:17:53.996 02:37:27 -- nvmf/common.sh@694 -- # python - 00:17:53.996 02:37:27 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:53.996 02:37:27 -- target/tls.sh@121 -- # mktemp 00:17:53.996 02:37:27 -- target/tls.sh@121 -- # key_path=/tmp/tmp.55RjcoDZEg 00:17:53.996 02:37:27 -- target/tls.sh@122 -- # mktemp 00:17:53.996 02:37:27 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.udalKqwlIN 00:17:53.996 02:37:27 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:53.996 02:37:27 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:53.996 02:37:27 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.55RjcoDZEg 00:17:53.996 02:37:27 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.udalKqwlIN 00:17:53.996 02:37:27 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:54.257 02:37:27 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:54.518 02:37:27 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.55RjcoDZEg 00:17:54.518 02:37:27 -- target/tls.sh@49 -- # local key=/tmp/tmp.55RjcoDZEg 00:17:54.518 02:37:27 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:54.518 [2024-04-27 02:37:28.030473] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.518 02:37:28 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:54.779 02:37:28 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:54.779 [2024-04-27 02:37:28.335218] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:54.779 [2024-04-27 02:37:28.335429] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.779 02:37:28 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:55.041 malloc0 00:17:55.041 02:37:28 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:55.302 02:37:28 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.55RjcoDZEg 00:17:55.302 [2024-04-27 02:37:28.863365] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:55.302 02:37:28 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.55RjcoDZEg 00:17:55.302 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.589 Initializing NVMe Controllers 00:18:07.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:07.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:07.589 Initialization complete. Launching workers. 00:18:07.589 ======================================================== 00:18:07.589 Latency(us) 00:18:07.589 Device Information : IOPS MiB/s Average min max 00:18:07.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13060.72 51.02 4900.93 1052.55 5599.12 00:18:07.589 ======================================================== 00:18:07.589 Total : 13060.72 51.02 4900.93 1052.55 5599.12 00:18:07.589 00:18:07.589 02:37:38 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.55RjcoDZEg 00:18:07.589 02:37:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:07.589 02:37:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:07.589 02:37:38 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:07.589 02:37:38 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.55RjcoDZEg' 00:18:07.589 02:37:38 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:07.589 02:37:38 -- target/tls.sh@28 -- # bdevperf_pid=135198 00:18:07.589 02:37:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:07.589 02:37:38 -- target/tls.sh@31 -- # waitforlisten 135198 /var/tmp/bdevperf.sock 00:18:07.589 02:37:38 -- common/autotest_common.sh@817 -- # '[' -z 135198 ']' 00:18:07.589 02:37:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.589 02:37:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:07.589 02:37:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:07.589 02:37:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:07.589 02:37:38 -- common/autotest_common.sh@10 -- # set +x 00:18:07.589 02:37:38 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:07.590 [2024-04-27 02:37:39.029800] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
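With the keys in place, the target-side sequence tls.sh ran above reduces to the following RPC calls (xtrace prefixes stripped; rpc.py stands for the in-tree scripts/rpc.py invoked against the nvmf_tgt started with --wait-for-rpc, and the key path is the temp file generated above):

rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.55RjcoDZEg

The spdk_nvme_perf run summarized above drives I/O against that listener with --psk-path; the bdevperf cases that follow attach with bdev_nvme_attach_controller --psk, and the NOT run_bdevperf variants deliberately use the wrong key, an unregistered host NQN, a nonexistent subsystem, or no key at all, so each is expected to fail with the JSON-RPC errors shown below.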
00:18:07.590 [2024-04-27 02:37:39.029856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135198 ] 00:18:07.590 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.590 [2024-04-27 02:37:39.078935] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.590 [2024-04-27 02:37:39.129882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.590 02:37:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:07.590 02:37:39 -- common/autotest_common.sh@850 -- # return 0 00:18:07.590 02:37:39 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.55RjcoDZEg 00:18:07.590 [2024-04-27 02:37:39.927001] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:07.590 [2024-04-27 02:37:39.927056] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:07.590 TLSTESTn1 00:18:07.590 02:37:40 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:07.590 Running I/O for 10 seconds... 00:18:17.608 00:18:17.608 Latency(us) 00:18:17.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.608 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:17.609 Verification LBA range: start 0x0 length 0x2000 00:18:17.609 TLSTESTn1 : 10.08 1776.95 6.94 0.00 0.00 71765.64 6089.39 145053.01 00:18:17.609 =================================================================================================================== 00:18:17.609 Total : 1776.95 6.94 0.00 0.00 71765.64 6089.39 145053.01 00:18:17.609 0 00:18:17.609 02:37:50 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:17.609 02:37:50 -- target/tls.sh@45 -- # killprocess 135198 00:18:17.609 02:37:50 -- common/autotest_common.sh@936 -- # '[' -z 135198 ']' 00:18:17.609 02:37:50 -- common/autotest_common.sh@940 -- # kill -0 135198 00:18:17.609 02:37:50 -- common/autotest_common.sh@941 -- # uname 00:18:17.609 02:37:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:17.609 02:37:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135198 00:18:17.609 02:37:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:17.609 02:37:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:17.609 02:37:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135198' 00:18:17.609 killing process with pid 135198 00:18:17.609 02:37:50 -- common/autotest_common.sh@955 -- # kill 135198 00:18:17.609 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.609 00:18:17.609 Latency(us) 00:18:17.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.609 =================================================================================================================== 00:18:17.609 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.609 [2024-04-27 02:37:50.306109] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:17.609 02:37:50 -- common/autotest_common.sh@960 -- # wait 135198 00:18:17.609 02:37:50 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.udalKqwlIN 00:18:17.609 02:37:50 -- common/autotest_common.sh@638 -- # local es=0 00:18:17.609 02:37:50 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.udalKqwlIN 00:18:17.609 02:37:50 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:17.609 02:37:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:17.609 02:37:50 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:17.609 02:37:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:17.609 02:37:50 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.udalKqwlIN 00:18:17.609 02:37:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:17.609 02:37:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:17.609 02:37:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:17.609 02:37:50 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.udalKqwlIN' 00:18:17.609 02:37:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.609 02:37:50 -- target/tls.sh@28 -- # bdevperf_pid=137317 00:18:17.609 02:37:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.609 02:37:50 -- target/tls.sh@31 -- # waitforlisten 137317 /var/tmp/bdevperf.sock 00:18:17.609 02:37:50 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.609 02:37:50 -- common/autotest_common.sh@817 -- # '[' -z 137317 ']' 00:18:17.609 02:37:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.609 02:37:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:17.609 02:37:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.609 02:37:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:17.609 02:37:50 -- common/autotest_common.sh@10 -- # set +x 00:18:17.609 [2024-04-27 02:37:50.467015] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:18:17.609 [2024-04-27 02:37:50.467070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137317 ] 00:18:17.609 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.609 [2024-04-27 02:37:50.515896] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.609 [2024-04-27 02:37:50.565825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.870 02:37:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:17.870 02:37:51 -- common/autotest_common.sh@850 -- # return 0 00:18:17.870 02:37:51 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.udalKqwlIN 00:18:17.870 [2024-04-27 02:37:51.366853] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.870 [2024-04-27 02:37:51.366909] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:17.870 [2024-04-27 02:37:51.371236] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:17.870 [2024-04-27 02:37:51.371836] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115b060 (107): Transport endpoint is not connected 00:18:17.870 [2024-04-27 02:37:51.372831] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115b060 (9): Bad file descriptor 00:18:17.870 [2024-04-27 02:37:51.373832] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:17.870 [2024-04-27 02:37:51.373840] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:17.870 [2024-04-27 02:37:51.373845] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:17.870 request: 00:18:17.870 { 00:18:17.870 "name": "TLSTEST", 00:18:17.870 "trtype": "tcp", 00:18:17.870 "traddr": "10.0.0.2", 00:18:17.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.870 "adrfam": "ipv4", 00:18:17.870 "trsvcid": "4420", 00:18:17.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.870 "psk": "/tmp/tmp.udalKqwlIN", 00:18:17.870 "method": "bdev_nvme_attach_controller", 00:18:17.870 "req_id": 1 00:18:17.870 } 00:18:17.870 Got JSON-RPC error response 00:18:17.870 response: 00:18:17.870 { 00:18:17.870 "code": -32602, 00:18:17.870 "message": "Invalid parameters" 00:18:17.870 } 00:18:17.870 02:37:51 -- target/tls.sh@36 -- # killprocess 137317 00:18:17.870 02:37:51 -- common/autotest_common.sh@936 -- # '[' -z 137317 ']' 00:18:17.870 02:37:51 -- common/autotest_common.sh@940 -- # kill -0 137317 00:18:17.870 02:37:51 -- common/autotest_common.sh@941 -- # uname 00:18:17.870 02:37:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:17.870 02:37:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137317 00:18:17.870 02:37:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:17.870 02:37:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:17.870 02:37:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137317' 00:18:17.870 killing process with pid 137317 00:18:17.870 02:37:51 -- common/autotest_common.sh@955 -- # kill 137317 00:18:17.870 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.870 00:18:17.870 Latency(us) 00:18:17.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.870 =================================================================================================================== 00:18:17.870 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:17.870 [2024-04-27 02:37:51.443241] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:17.870 02:37:51 -- common/autotest_common.sh@960 -- # wait 137317 00:18:18.133 02:37:51 -- target/tls.sh@37 -- # return 1 00:18:18.133 02:37:51 -- common/autotest_common.sh@641 -- # es=1 00:18:18.133 02:37:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:18.133 02:37:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:18.133 02:37:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:18.133 02:37:51 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.55RjcoDZEg 00:18:18.133 02:37:51 -- common/autotest_common.sh@638 -- # local es=0 00:18:18.133 02:37:51 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.55RjcoDZEg 00:18:18.133 02:37:51 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:18.133 02:37:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:18.133 02:37:51 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:18.133 02:37:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:18.133 02:37:51 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.55RjcoDZEg 00:18:18.133 02:37:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:18.133 02:37:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:18.133 02:37:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:18:18.133 02:37:51 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.55RjcoDZEg' 00:18:18.133 02:37:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.133 02:37:51 -- target/tls.sh@28 -- # bdevperf_pid=137652 00:18:18.133 02:37:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.133 02:37:51 -- target/tls.sh@31 -- # waitforlisten 137652 /var/tmp/bdevperf.sock 00:18:18.133 02:37:51 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.133 02:37:51 -- common/autotest_common.sh@817 -- # '[' -z 137652 ']' 00:18:18.133 02:37:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.133 02:37:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:18.133 02:37:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.133 02:37:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:18.133 02:37:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.133 [2024-04-27 02:37:51.597143] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:18.133 [2024-04-27 02:37:51.597196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137652 ] 00:18:18.133 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.133 [2024-04-27 02:37:51.647080] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.133 [2024-04-27 02:37:51.696866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.076 02:37:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:19.076 02:37:52 -- common/autotest_common.sh@850 -- # return 0 00:18:19.076 02:37:52 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.55RjcoDZEg 00:18:19.076 [2024-04-27 02:37:52.505950] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.076 [2024-04-27 02:37:52.506011] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:19.076 [2024-04-27 02:37:52.512159] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:19.076 [2024-04-27 02:37:52.512183] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:19.076 [2024-04-27 02:37:52.512207] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:19.076 [2024-04-27 02:37:52.513174] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147a060 (107): Transport endpoint is not connected 00:18:19.077 [2024-04-27 02:37:52.514168] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147a060 (9): Bad file descriptor 00:18:19.077 [2024-04-27 02:37:52.515169] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:19.077 [2024-04-27 02:37:52.515177] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:19.077 [2024-04-27 02:37:52.515183] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:19.077 request: 00:18:19.077 { 00:18:19.077 "name": "TLSTEST", 00:18:19.077 "trtype": "tcp", 00:18:19.077 "traddr": "10.0.0.2", 00:18:19.077 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:19.077 "adrfam": "ipv4", 00:18:19.077 "trsvcid": "4420", 00:18:19.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.077 "psk": "/tmp/tmp.55RjcoDZEg", 00:18:19.077 "method": "bdev_nvme_attach_controller", 00:18:19.077 "req_id": 1 00:18:19.077 } 00:18:19.077 Got JSON-RPC error response 00:18:19.077 response: 00:18:19.077 { 00:18:19.077 "code": -32602, 00:18:19.077 "message": "Invalid parameters" 00:18:19.077 } 00:18:19.077 02:37:52 -- target/tls.sh@36 -- # killprocess 137652 00:18:19.077 02:37:52 -- common/autotest_common.sh@936 -- # '[' -z 137652 ']' 00:18:19.077 02:37:52 -- common/autotest_common.sh@940 -- # kill -0 137652 00:18:19.077 02:37:52 -- common/autotest_common.sh@941 -- # uname 00:18:19.077 02:37:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.077 02:37:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137652 00:18:19.077 02:37:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:19.077 02:37:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:19.077 02:37:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137652' 00:18:19.077 killing process with pid 137652 00:18:19.077 02:37:52 -- common/autotest_common.sh@955 -- # kill 137652 00:18:19.077 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.077 00:18:19.077 Latency(us) 00:18:19.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.077 =================================================================================================================== 00:18:19.077 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.077 [2024-04-27 02:37:52.589442] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:19.077 02:37:52 -- common/autotest_common.sh@960 -- # wait 137652 00:18:19.077 02:37:52 -- target/tls.sh@37 -- # return 1 00:18:19.077 02:37:52 -- common/autotest_common.sh@641 -- # es=1 00:18:19.077 02:37:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:19.077 02:37:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:19.077 02:37:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:19.077 02:37:52 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.55RjcoDZEg 00:18:19.077 02:37:52 -- common/autotest_common.sh@638 -- # local es=0 00:18:19.077 02:37:52 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.55RjcoDZEg 00:18:19.077 02:37:52 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:19.077 02:37:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:19.077 02:37:52 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:19.077 02:37:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:19.077 02:37:52 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.55RjcoDZEg 00:18:19.077 02:37:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:19.338 02:37:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:19.338 02:37:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:19.338 02:37:52 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.55RjcoDZEg' 00:18:19.338 02:37:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.338 02:37:52 -- target/tls.sh@28 -- # bdevperf_pid=137846 00:18:19.338 02:37:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.338 02:37:52 -- target/tls.sh@31 -- # waitforlisten 137846 /var/tmp/bdevperf.sock 00:18:19.338 02:37:52 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.338 02:37:52 -- common/autotest_common.sh@817 -- # '[' -z 137846 ']' 00:18:19.338 02:37:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.338 02:37:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:19.338 02:37:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.338 02:37:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:19.338 02:37:52 -- common/autotest_common.sh@10 -- # set +x 00:18:19.338 [2024-04-27 02:37:52.753614] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:18:19.338 [2024-04-27 02:37:52.753680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137846 ] 00:18:19.338 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.338 [2024-04-27 02:37:52.804962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.338 [2024-04-27 02:37:52.855038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.908 02:37:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:19.908 02:37:53 -- common/autotest_common.sh@850 -- # return 0 00:18:19.908 02:37:53 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.55RjcoDZEg 00:18:20.169 [2024-04-27 02:37:53.640014] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.169 [2024-04-27 02:37:53.640078] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:20.169 [2024-04-27 02:37:53.644973] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:20.169 [2024-04-27 02:37:53.644995] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:20.169 [2024-04-27 02:37:53.645019] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:20.169 [2024-04-27 02:37:53.646133] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6f060 (107): Transport endpoint is not connected 00:18:20.169 [2024-04-27 02:37:53.647128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6f060 (9): Bad file descriptor 00:18:20.169 [2024-04-27 02:37:53.648130] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:20.169 [2024-04-27 02:37:53.648138] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:20.169 [2024-04-27 02:37:53.648143] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:20.169 request: 00:18:20.169 { 00:18:20.169 "name": "TLSTEST", 00:18:20.169 "trtype": "tcp", 00:18:20.169 "traddr": "10.0.0.2", 00:18:20.169 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.169 "adrfam": "ipv4", 00:18:20.169 "trsvcid": "4420", 00:18:20.169 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:20.169 "psk": "/tmp/tmp.55RjcoDZEg", 00:18:20.169 "method": "bdev_nvme_attach_controller", 00:18:20.169 "req_id": 1 00:18:20.169 } 00:18:20.169 Got JSON-RPC error response 00:18:20.169 response: 00:18:20.169 { 00:18:20.169 "code": -32602, 00:18:20.169 "message": "Invalid parameters" 00:18:20.169 } 00:18:20.169 02:37:53 -- target/tls.sh@36 -- # killprocess 137846 00:18:20.169 02:37:53 -- common/autotest_common.sh@936 -- # '[' -z 137846 ']' 00:18:20.169 02:37:53 -- common/autotest_common.sh@940 -- # kill -0 137846 00:18:20.169 02:37:53 -- common/autotest_common.sh@941 -- # uname 00:18:20.169 02:37:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:20.169 02:37:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137846 00:18:20.169 02:37:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:20.169 02:37:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:20.169 02:37:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137846' 00:18:20.169 killing process with pid 137846 00:18:20.169 02:37:53 -- common/autotest_common.sh@955 -- # kill 137846 00:18:20.169 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.169 00:18:20.169 Latency(us) 00:18:20.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.169 =================================================================================================================== 00:18:20.169 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:20.169 [2024-04-27 02:37:53.720517] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:20.169 02:37:53 -- common/autotest_common.sh@960 -- # wait 137846 00:18:20.430 02:37:53 -- target/tls.sh@37 -- # return 1 00:18:20.430 02:37:53 -- common/autotest_common.sh@641 -- # es=1 00:18:20.430 02:37:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:20.430 02:37:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:20.430 02:37:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:20.430 02:37:53 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:20.430 02:37:53 -- common/autotest_common.sh@638 -- # local es=0 00:18:20.430 02:37:53 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:20.430 02:37:53 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:20.430 02:37:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:20.430 02:37:53 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:20.430 02:37:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:20.430 02:37:53 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:20.430 02:37:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:20.430 02:37:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:20.430 02:37:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:20.430 02:37:53 -- target/tls.sh@23 -- # psk= 
00:18:20.430 02:37:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:20.430 02:37:53 -- target/tls.sh@28 -- # bdevperf_pid=138014 00:18:20.430 02:37:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:20.430 02:37:53 -- target/tls.sh@31 -- # waitforlisten 138014 /var/tmp/bdevperf.sock 00:18:20.430 02:37:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:20.430 02:37:53 -- common/autotest_common.sh@817 -- # '[' -z 138014 ']' 00:18:20.430 02:37:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.430 02:37:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:20.430 02:37:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.430 02:37:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:20.430 02:37:53 -- common/autotest_common.sh@10 -- # set +x 00:18:20.430 [2024-04-27 02:37:53.873476] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:20.430 [2024-04-27 02:37:53.873529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138014 ] 00:18:20.430 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.430 [2024-04-27 02:37:53.923230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.430 [2024-04-27 02:37:53.973112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.379 02:37:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.379 02:37:54 -- common/autotest_common.sh@850 -- # return 0 00:18:21.379 02:37:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:21.379 [2024-04-27 02:37:54.787804] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:21.379 [2024-04-27 02:37:54.788987] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb9b60 (9): Bad file descriptor 00:18:21.379 [2024-04-27 02:37:54.789985] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:21.379 [2024-04-27 02:37:54.789993] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:21.379 [2024-04-27 02:37:54.789999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:21.379 request: 00:18:21.379 { 00:18:21.379 "name": "TLSTEST", 00:18:21.379 "trtype": "tcp", 00:18:21.379 "traddr": "10.0.0.2", 00:18:21.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.379 "adrfam": "ipv4", 00:18:21.379 "trsvcid": "4420", 00:18:21.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.379 "method": "bdev_nvme_attach_controller", 00:18:21.379 "req_id": 1 00:18:21.379 } 00:18:21.379 Got JSON-RPC error response 00:18:21.379 response: 00:18:21.379 { 00:18:21.379 "code": -32602, 00:18:21.379 "message": "Invalid parameters" 00:18:21.379 } 00:18:21.379 02:37:54 -- target/tls.sh@36 -- # killprocess 138014 00:18:21.379 02:37:54 -- common/autotest_common.sh@936 -- # '[' -z 138014 ']' 00:18:21.379 02:37:54 -- common/autotest_common.sh@940 -- # kill -0 138014 00:18:21.379 02:37:54 -- common/autotest_common.sh@941 -- # uname 00:18:21.379 02:37:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:21.379 02:37:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138014 00:18:21.379 02:37:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:21.379 02:37:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:21.379 02:37:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138014' 00:18:21.379 killing process with pid 138014 00:18:21.379 02:37:54 -- common/autotest_common.sh@955 -- # kill 138014 00:18:21.379 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.379 00:18:21.379 Latency(us) 00:18:21.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.379 =================================================================================================================== 00:18:21.379 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:21.379 02:37:54 -- common/autotest_common.sh@960 -- # wait 138014 00:18:21.379 02:37:54 -- target/tls.sh@37 -- # return 1 00:18:21.379 02:37:54 -- common/autotest_common.sh@641 -- # es=1 00:18:21.379 02:37:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:21.379 02:37:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:21.379 02:37:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:21.379 02:37:54 -- target/tls.sh@158 -- # killprocess 132301 00:18:21.379 02:37:54 -- common/autotest_common.sh@936 -- # '[' -z 132301 ']' 00:18:21.379 02:37:54 -- common/autotest_common.sh@940 -- # kill -0 132301 00:18:21.379 02:37:54 -- common/autotest_common.sh@941 -- # uname 00:18:21.379 02:37:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:21.379 02:37:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132301 00:18:21.640 02:37:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:21.640 02:37:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:21.640 02:37:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132301' 00:18:21.640 killing process with pid 132301 00:18:21.640 02:37:55 -- common/autotest_common.sh@955 -- # kill 132301 00:18:21.640 [2024-04-27 02:37:55.019167] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:21.640 02:37:55 -- common/autotest_common.sh@960 -- # wait 132301 00:18:21.640 02:37:55 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:21.640 02:37:55 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:18:21.640 02:37:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:21.640 02:37:55 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:21.640 02:37:55 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:21.640 02:37:55 -- nvmf/common.sh@693 -- # digest=2 00:18:21.640 02:37:55 -- nvmf/common.sh@694 -- # python - 00:18:21.640 02:37:55 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:21.640 02:37:55 -- target/tls.sh@160 -- # mktemp 00:18:21.640 02:37:55 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.86E7sGhww9 00:18:21.640 02:37:55 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:21.640 02:37:55 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.86E7sGhww9 00:18:21.640 02:37:55 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:21.640 02:37:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:21.640 02:37:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:21.640 02:37:55 -- common/autotest_common.sh@10 -- # set +x 00:18:21.640 02:37:55 -- nvmf/common.sh@470 -- # nvmfpid=138364 00:18:21.640 02:37:55 -- nvmf/common.sh@471 -- # waitforlisten 138364 00:18:21.640 02:37:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:21.640 02:37:55 -- common/autotest_common.sh@817 -- # '[' -z 138364 ']' 00:18:21.640 02:37:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.640 02:37:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:21.640 02:37:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.640 02:37:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:21.640 02:37:55 -- common/autotest_common.sh@10 -- # set +x 00:18:21.902 [2024-04-27 02:37:55.270898] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:21.902 [2024-04-27 02:37:55.270951] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.902 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.902 [2024-04-27 02:37:55.334689] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.902 [2024-04-27 02:37:55.395464] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.902 [2024-04-27 02:37:55.395501] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.902 [2024-04-27 02:37:55.395509] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.902 [2024-04-27 02:37:55.395515] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.902 [2024-04-27 02:37:55.395521] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
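Editorial sketch of the key-generation step traced just above: format_interchange_psk turns the raw hex string into the NVMe TLS PSK interchange form ("NVMeTLSkey-1:<digest>:<base64 blob>:") that is then written to the mktemp file and locked to mode 0600. The bash below mirrors the embedded `python -` call the helper uses; the internals (in particular appending a little-endian CRC32 of the key text before base64-encoding) are an assumption inferred from the printed key_long value, not read from nvmf/common.sh.

    # Assumed reconstruction, not the actual nvmf/common.sh helper.
    key=00112233445566778899aabbccddeeff0011223344556677
    b64=$(python3 - "$key" <<'PY'
    import base64, struct, sys, zlib
    k = sys.argv[1].encode()
    # CRC32 of the key text appended little-endian: an assumption based on the output above
    print(base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode())
    PY
    )
    key_long="NVMeTLSkey-1:02:${b64}:"   # digest 02 = 48-byte (SHA-384) PSK, 01 = 32-byte
    key_path=$(mktemp)                   # this run got /tmp/tmp.86E7sGhww9
    echo -n "$key_long" > "$key_path"
    chmod 0600 "$key_path"               # later flipped to 0666 to provoke the permission failures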
00:18:21.902 [2024-04-27 02:37:55.395545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.474 02:37:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:22.474 02:37:56 -- common/autotest_common.sh@850 -- # return 0 00:18:22.474 02:37:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:22.474 02:37:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:22.474 02:37:56 -- common/autotest_common.sh@10 -- # set +x 00:18:22.736 02:37:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.737 02:37:56 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.86E7sGhww9 00:18:22.737 02:37:56 -- target/tls.sh@49 -- # local key=/tmp/tmp.86E7sGhww9 00:18:22.737 02:37:56 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:22.737 [2024-04-27 02:37:56.238398] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.737 02:37:56 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:22.997 02:37:56 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:22.997 [2024-04-27 02:37:56.547177] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.997 [2024-04-27 02:37:56.547402] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.997 02:37:56 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:23.257 malloc0 00:18:23.257 02:37:56 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:23.257 02:37:56 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.86E7sGhww9 00:18:23.518 [2024-04-27 02:37:56.995022] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:23.518 02:37:57 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.86E7sGhww9 00:18:23.518 02:37:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:23.518 02:37:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:23.518 02:37:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:23.518 02:37:57 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.86E7sGhww9' 00:18:23.518 02:37:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.518 02:37:57 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.518 02:37:57 -- target/tls.sh@28 -- # bdevperf_pid=138724 00:18:23.518 02:37:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.518 02:37:57 -- target/tls.sh@31 -- # waitforlisten 138724 /var/tmp/bdevperf.sock 00:18:23.518 02:37:57 -- common/autotest_common.sh@817 -- # '[' -z 138724 ']' 00:18:23.518 02:37:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.518 02:37:57 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:18:23.518 02:37:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.518 02:37:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:23.518 02:37:57 -- common/autotest_common.sh@10 -- # set +x 00:18:23.518 [2024-04-27 02:37:57.042107] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:23.518 [2024-04-27 02:37:57.042157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138724 ] 00:18:23.518 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.518 [2024-04-27 02:37:57.094789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.779 [2024-04-27 02:37:57.144649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.779 02:37:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:23.779 02:37:57 -- common/autotest_common.sh@850 -- # return 0 00:18:23.779 02:37:57 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.86E7sGhww9 00:18:23.779 [2024-04-27 02:37:57.352425] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:23.779 [2024-04-27 02:37:57.352485] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:24.041 TLSTESTn1 00:18:24.041 02:37:57 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:24.041 Running I/O for 10 seconds... 
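Editorial note on the passing iteration that starts here: the client side is simply bdevperf started with an RPC socket (-z -r), a TLS controller attached over that socket with --psk pointing at the 0600 key file, and I/O driven through bdevperf.py perform_tests. A condensed sketch of that sequence, using the same paths, address and NQNs seen in the traces; waiting for the RPC socket before issuing RPCs is implied (the harness uses waitforlisten for it).

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z) with its own RPC socket, as in the trace above
    $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &
    # (wait for $SOCK to appear before issuing RPCs)

    # Attach a TLS-enabled controller; the key file must remain mode 0600
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.86E7sGhww9

    # Kick off the 10-second verify workload reported in the results below
    $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s $SOCK perform_tests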
00:18:34.058 00:18:34.058 Latency(us) 00:18:34.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.058 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:34.058 Verification LBA range: start 0x0 length 0x2000 00:18:34.058 TLSTESTn1 : 10.07 1785.79 6.98 0.00 0.00 71448.92 6144.00 143305.39 00:18:34.058 =================================================================================================================== 00:18:34.058 Total : 1785.79 6.98 0.00 0.00 71448.92 6144.00 143305.39 00:18:34.058 0 00:18:34.058 02:38:07 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:34.058 02:38:07 -- target/tls.sh@45 -- # killprocess 138724 00:18:34.058 02:38:07 -- common/autotest_common.sh@936 -- # '[' -z 138724 ']' 00:18:34.058 02:38:07 -- common/autotest_common.sh@940 -- # kill -0 138724 00:18:34.058 02:38:07 -- common/autotest_common.sh@941 -- # uname 00:18:34.058 02:38:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:34.058 02:38:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138724 00:18:34.320 02:38:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:34.320 02:38:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:34.320 02:38:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138724' 00:18:34.320 killing process with pid 138724 00:18:34.320 02:38:07 -- common/autotest_common.sh@955 -- # kill 138724 00:18:34.320 Received shutdown signal, test time was about 10.000000 seconds 00:18:34.320 00:18:34.320 Latency(us) 00:18:34.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.320 =================================================================================================================== 00:18:34.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.320 [2024-04-27 02:38:07.701903] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:34.320 02:38:07 -- common/autotest_common.sh@960 -- # wait 138724 00:18:34.320 02:38:07 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.86E7sGhww9 00:18:34.320 02:38:07 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.86E7sGhww9 00:18:34.320 02:38:07 -- common/autotest_common.sh@638 -- # local es=0 00:18:34.320 02:38:07 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.86E7sGhww9 00:18:34.320 02:38:07 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:34.320 02:38:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:34.320 02:38:07 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:34.320 02:38:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:34.320 02:38:07 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.86E7sGhww9 00:18:34.320 02:38:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:34.320 02:38:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:34.320 02:38:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:34.320 02:38:07 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.86E7sGhww9' 00:18:34.320 02:38:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.320 02:38:07 -- target/tls.sh@28 -- # bdevperf_pid=140743 
00:18:34.320 02:38:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.320 02:38:07 -- target/tls.sh@31 -- # waitforlisten 140743 /var/tmp/bdevperf.sock 00:18:34.320 02:38:07 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:34.320 02:38:07 -- common/autotest_common.sh@817 -- # '[' -z 140743 ']' 00:18:34.320 02:38:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.320 02:38:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:34.320 02:38:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.320 02:38:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:34.320 02:38:07 -- common/autotest_common.sh@10 -- # set +x 00:18:34.320 [2024-04-27 02:38:07.867703] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:34.320 [2024-04-27 02:38:07.867756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140743 ] 00:18:34.320 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.320 [2024-04-27 02:38:07.917639] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.581 [2024-04-27 02:38:07.967184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.154 02:38:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:35.154 02:38:08 -- common/autotest_common.sh@850 -- # return 0 00:18:35.154 02:38:08 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.86E7sGhww9 00:18:35.154 [2024-04-27 02:38:08.772401] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.154 [2024-04-27 02:38:08.772443] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:35.154 [2024-04-27 02:38:08.772448] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.86E7sGhww9 00:18:35.415 request: 00:18:35.415 { 00:18:35.415 "name": "TLSTEST", 00:18:35.415 "trtype": "tcp", 00:18:35.415 "traddr": "10.0.0.2", 00:18:35.415 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:35.415 "adrfam": "ipv4", 00:18:35.415 "trsvcid": "4420", 00:18:35.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.415 "psk": "/tmp/tmp.86E7sGhww9", 00:18:35.415 "method": "bdev_nvme_attach_controller", 00:18:35.415 "req_id": 1 00:18:35.415 } 00:18:35.415 Got JSON-RPC error response 00:18:35.415 response: 00:18:35.415 { 00:18:35.415 "code": -1, 00:18:35.415 "message": "Operation not permitted" 00:18:35.415 } 00:18:35.415 02:38:08 -- target/tls.sh@36 -- # killprocess 140743 00:18:35.415 02:38:08 -- common/autotest_common.sh@936 -- # '[' -z 140743 ']' 00:18:35.415 02:38:08 -- common/autotest_common.sh@940 -- # kill -0 140743 00:18:35.415 02:38:08 -- common/autotest_common.sh@941 -- # uname 00:18:35.415 02:38:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.415 02:38:08 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140743 00:18:35.415 02:38:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:35.415 02:38:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:35.415 02:38:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140743' 00:18:35.415 killing process with pid 140743 00:18:35.415 02:38:08 -- common/autotest_common.sh@955 -- # kill 140743 00:18:35.415 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.415 00:18:35.415 Latency(us) 00:18:35.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.415 =================================================================================================================== 00:18:35.415 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:35.415 02:38:08 -- common/autotest_common.sh@960 -- # wait 140743 00:18:35.415 02:38:08 -- target/tls.sh@37 -- # return 1 00:18:35.415 02:38:08 -- common/autotest_common.sh@641 -- # es=1 00:18:35.415 02:38:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:35.415 02:38:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:35.415 02:38:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:35.415 02:38:08 -- target/tls.sh@174 -- # killprocess 138364 00:18:35.415 02:38:08 -- common/autotest_common.sh@936 -- # '[' -z 138364 ']' 00:18:35.415 02:38:08 -- common/autotest_common.sh@940 -- # kill -0 138364 00:18:35.415 02:38:08 -- common/autotest_common.sh@941 -- # uname 00:18:35.416 02:38:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.416 02:38:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138364 00:18:35.416 02:38:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:35.416 02:38:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:35.416 02:38:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138364' 00:18:35.416 killing process with pid 138364 00:18:35.416 02:38:08 -- common/autotest_common.sh@955 -- # kill 138364 00:18:35.416 [2024-04-27 02:38:09.001261] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:35.416 02:38:08 -- common/autotest_common.sh@960 -- # wait 138364 00:18:35.678 02:38:09 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:35.678 02:38:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:35.678 02:38:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:35.678 02:38:09 -- common/autotest_common.sh@10 -- # set +x 00:18:35.678 02:38:09 -- nvmf/common.sh@470 -- # nvmfpid=141086 00:18:35.678 02:38:09 -- nvmf/common.sh@471 -- # waitforlisten 141086 00:18:35.678 02:38:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:35.678 02:38:09 -- common/autotest_common.sh@817 -- # '[' -z 141086 ']' 00:18:35.678 02:38:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.678 02:38:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:35.678 02:38:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:35.678 02:38:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:35.678 02:38:09 -- common/autotest_common.sh@10 -- # set +x 00:18:35.678 [2024-04-27 02:38:09.207796] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:35.678 [2024-04-27 02:38:09.207863] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.678 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.678 [2024-04-27 02:38:09.272753] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.939 [2024-04-27 02:38:09.334044] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.939 [2024-04-27 02:38:09.334081] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.939 [2024-04-27 02:38:09.334088] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.939 [2024-04-27 02:38:09.334100] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.939 [2024-04-27 02:38:09.334106] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.939 [2024-04-27 02:38:09.334124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.512 02:38:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:36.512 02:38:09 -- common/autotest_common.sh@850 -- # return 0 00:18:36.512 02:38:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:36.512 02:38:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:36.512 02:38:09 -- common/autotest_common.sh@10 -- # set +x 00:18:36.512 02:38:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.512 02:38:10 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.86E7sGhww9 00:18:36.512 02:38:10 -- common/autotest_common.sh@638 -- # local es=0 00:18:36.512 02:38:10 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.86E7sGhww9 00:18:36.512 02:38:10 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:18:36.512 02:38:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:36.512 02:38:10 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:18:36.512 02:38:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:36.512 02:38:10 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.86E7sGhww9 00:18:36.512 02:38:10 -- target/tls.sh@49 -- # local key=/tmp/tmp.86E7sGhww9 00:18:36.512 02:38:10 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:36.773 [2024-04-27 02:38:10.176858] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.773 02:38:10 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:36.773 02:38:10 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:37.035 [2024-04-27 02:38:10.465575] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.035 [2024-04-27 02:38:10.465791] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.035 02:38:10 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:37.035 malloc0 00:18:37.035 02:38:10 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:37.296 02:38:10 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.86E7sGhww9 00:18:37.557 [2024-04-27 02:38:10.961460] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:37.557 [2024-04-27 02:38:10.961485] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:37.557 [2024-04-27 02:38:10.961508] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:37.557 request: 00:18:37.557 { 00:18:37.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.557 "host": "nqn.2016-06.io.spdk:host1", 00:18:37.557 "psk": "/tmp/tmp.86E7sGhww9", 00:18:37.557 "method": "nvmf_subsystem_add_host", 00:18:37.557 "req_id": 1 00:18:37.557 } 00:18:37.557 Got JSON-RPC error response 00:18:37.557 response: 00:18:37.557 { 00:18:37.557 "code": -32603, 00:18:37.557 "message": "Internal error" 00:18:37.557 } 00:18:37.557 02:38:10 -- common/autotest_common.sh@641 -- # es=1 00:18:37.557 02:38:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:37.557 02:38:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:37.557 02:38:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:37.557 02:38:10 -- target/tls.sh@180 -- # killprocess 141086 00:18:37.557 02:38:10 -- common/autotest_common.sh@936 -- # '[' -z 141086 ']' 00:18:37.557 02:38:10 -- common/autotest_common.sh@940 -- # kill -0 141086 00:18:37.557 02:38:10 -- common/autotest_common.sh@941 -- # uname 00:18:37.557 02:38:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:37.557 02:38:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141086 00:18:37.557 02:38:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:37.557 02:38:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:37.557 02:38:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141086' 00:18:37.557 killing process with pid 141086 00:18:37.557 02:38:11 -- common/autotest_common.sh@955 -- # kill 141086 00:18:37.557 02:38:11 -- common/autotest_common.sh@960 -- # wait 141086 00:18:37.557 02:38:11 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.86E7sGhww9 00:18:37.557 02:38:11 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:37.557 02:38:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:37.557 02:38:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:37.557 02:38:11 -- common/autotest_common.sh@10 -- # set +x 00:18:37.819 02:38:11 -- nvmf/common.sh@470 -- # nvmfpid=141460 00:18:37.819 02:38:11 -- nvmf/common.sh@471 -- # waitforlisten 141460 00:18:37.819 02:38:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:37.819 02:38:11 -- common/autotest_common.sh@817 -- # '[' -z 141460 ']' 00:18:37.819 02:38:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.819 02:38:11 -- common/autotest_common.sh@822 -- # 
local max_retries=100 00:18:37.819 02:38:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.819 02:38:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:37.819 02:38:11 -- common/autotest_common.sh@10 -- # set +x 00:18:37.819 [2024-04-27 02:38:11.237235] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:37.819 [2024-04-27 02:38:11.237296] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.819 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.819 [2024-04-27 02:38:11.302609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.819 [2024-04-27 02:38:11.365667] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.819 [2024-04-27 02:38:11.365702] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.819 [2024-04-27 02:38:11.365709] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.819 [2024-04-27 02:38:11.365716] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.819 [2024-04-27 02:38:11.365722] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.819 [2024-04-27 02:38:11.365739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.391 02:38:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:38.391 02:38:11 -- common/autotest_common.sh@850 -- # return 0 00:18:38.391 02:38:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:38.391 02:38:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:38.391 02:38:11 -- common/autotest_common.sh@10 -- # set +x 00:18:38.653 02:38:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.653 02:38:12 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.86E7sGhww9 00:18:38.653 02:38:12 -- target/tls.sh@49 -- # local key=/tmp/tmp.86E7sGhww9 00:18:38.653 02:38:12 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:38.653 [2024-04-27 02:38:12.168414] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.653 02:38:12 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:38.914 02:38:12 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:38.914 [2024-04-27 02:38:12.465142] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:38.914 [2024-04-27 02:38:12.465355] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.914 02:38:12 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:39.175 malloc0 00:18:39.175 02:38:12 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:39.175 02:38:12 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.86E7sGhww9 00:18:39.451 [2024-04-27 02:38:12.896903] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:39.452 02:38:12 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:39.452 02:38:12 -- target/tls.sh@188 -- # bdevperf_pid=141823 00:18:39.452 02:38:12 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:39.452 02:38:12 -- target/tls.sh@191 -- # waitforlisten 141823 /var/tmp/bdevperf.sock 00:18:39.452 02:38:12 -- common/autotest_common.sh@817 -- # '[' -z 141823 ']' 00:18:39.452 02:38:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.452 02:38:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:39.452 02:38:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.452 02:38:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:39.452 02:38:12 -- common/autotest_common.sh@10 -- # set +x 00:18:39.452 [2024-04-27 02:38:12.941308] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:39.452 [2024-04-27 02:38:12.941357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141823 ] 00:18:39.452 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.452 [2024-04-27 02:38:12.991248] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.452 [2024-04-27 02:38:13.041947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.722 02:38:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:39.722 02:38:13 -- common/autotest_common.sh@850 -- # return 0 00:18:39.722 02:38:13 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.86E7sGhww9 00:18:39.722 [2024-04-27 02:38:13.249709] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:39.722 [2024-04-27 02:38:13.249771] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:39.722 TLSTESTn1 00:18:39.984 02:38:13 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:39.984 02:38:13 -- target/tls.sh@196 -- # tgtconf='{ 00:18:39.984 "subsystems": [ 00:18:39.984 { 00:18:39.984 "subsystem": "keyring", 00:18:39.984 "config": [] 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "subsystem": "iobuf", 00:18:39.984 "config": [ 00:18:39.984 { 00:18:39.984 "method": "iobuf_set_options", 00:18:39.984 "params": { 00:18:39.984 
"small_pool_count": 8192, 00:18:39.984 "large_pool_count": 1024, 00:18:39.984 "small_bufsize": 8192, 00:18:39.984 "large_bufsize": 135168 00:18:39.984 } 00:18:39.984 } 00:18:39.984 ] 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "subsystem": "sock", 00:18:39.984 "config": [ 00:18:39.984 { 00:18:39.984 "method": "sock_impl_set_options", 00:18:39.984 "params": { 00:18:39.984 "impl_name": "posix", 00:18:39.984 "recv_buf_size": 2097152, 00:18:39.984 "send_buf_size": 2097152, 00:18:39.984 "enable_recv_pipe": true, 00:18:39.984 "enable_quickack": false, 00:18:39.984 "enable_placement_id": 0, 00:18:39.984 "enable_zerocopy_send_server": true, 00:18:39.984 "enable_zerocopy_send_client": false, 00:18:39.984 "zerocopy_threshold": 0, 00:18:39.984 "tls_version": 0, 00:18:39.984 "enable_ktls": false 00:18:39.984 } 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "method": "sock_impl_set_options", 00:18:39.984 "params": { 00:18:39.984 "impl_name": "ssl", 00:18:39.984 "recv_buf_size": 4096, 00:18:39.984 "send_buf_size": 4096, 00:18:39.984 "enable_recv_pipe": true, 00:18:39.984 "enable_quickack": false, 00:18:39.984 "enable_placement_id": 0, 00:18:39.984 "enable_zerocopy_send_server": true, 00:18:39.984 "enable_zerocopy_send_client": false, 00:18:39.984 "zerocopy_threshold": 0, 00:18:39.984 "tls_version": 0, 00:18:39.984 "enable_ktls": false 00:18:39.984 } 00:18:39.984 } 00:18:39.984 ] 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "subsystem": "vmd", 00:18:39.984 "config": [] 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "subsystem": "accel", 00:18:39.984 "config": [ 00:18:39.984 { 00:18:39.984 "method": "accel_set_options", 00:18:39.984 "params": { 00:18:39.984 "small_cache_size": 128, 00:18:39.984 "large_cache_size": 16, 00:18:39.984 "task_count": 2048, 00:18:39.984 "sequence_count": 2048, 00:18:39.984 "buf_count": 2048 00:18:39.984 } 00:18:39.984 } 00:18:39.984 ] 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "subsystem": "bdev", 00:18:39.984 "config": [ 00:18:39.984 { 00:18:39.984 "method": "bdev_set_options", 00:18:39.984 "params": { 00:18:39.984 "bdev_io_pool_size": 65535, 00:18:39.984 "bdev_io_cache_size": 256, 00:18:39.984 "bdev_auto_examine": true, 00:18:39.984 "iobuf_small_cache_size": 128, 00:18:39.984 "iobuf_large_cache_size": 16 00:18:39.984 } 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "method": "bdev_raid_set_options", 00:18:39.984 "params": { 00:18:39.984 "process_window_size_kb": 1024 00:18:39.984 } 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "method": "bdev_iscsi_set_options", 00:18:39.984 "params": { 00:18:39.984 "timeout_sec": 30 00:18:39.984 } 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "method": "bdev_nvme_set_options", 00:18:39.984 "params": { 00:18:39.984 "action_on_timeout": "none", 00:18:39.984 "timeout_us": 0, 00:18:39.984 "timeout_admin_us": 0, 00:18:39.984 "keep_alive_timeout_ms": 10000, 00:18:39.984 "arbitration_burst": 0, 00:18:39.984 "low_priority_weight": 0, 00:18:39.984 "medium_priority_weight": 0, 00:18:39.984 "high_priority_weight": 0, 00:18:39.984 "nvme_adminq_poll_period_us": 10000, 00:18:39.984 "nvme_ioq_poll_period_us": 0, 00:18:39.984 "io_queue_requests": 0, 00:18:39.984 "delay_cmd_submit": true, 00:18:39.984 "transport_retry_count": 4, 00:18:39.984 "bdev_retry_count": 3, 00:18:39.984 "transport_ack_timeout": 0, 00:18:39.984 "ctrlr_loss_timeout_sec": 0, 00:18:39.984 "reconnect_delay_sec": 0, 00:18:39.984 "fast_io_fail_timeout_sec": 0, 00:18:39.984 "disable_auto_failback": false, 00:18:39.984 "generate_uuids": false, 00:18:39.984 "transport_tos": 0, 00:18:39.984 "nvme_error_stat": 
false, 00:18:39.984 "rdma_srq_size": 0, 00:18:39.984 "io_path_stat": false, 00:18:39.984 "allow_accel_sequence": false, 00:18:39.984 "rdma_max_cq_size": 0, 00:18:39.984 "rdma_cm_event_timeout_ms": 0, 00:18:39.984 "dhchap_digests": [ 00:18:39.984 "sha256", 00:18:39.984 "sha384", 00:18:39.984 "sha512" 00:18:39.984 ], 00:18:39.984 "dhchap_dhgroups": [ 00:18:39.984 "null", 00:18:39.984 "ffdhe2048", 00:18:39.984 "ffdhe3072", 00:18:39.984 "ffdhe4096", 00:18:39.984 "ffdhe6144", 00:18:39.984 "ffdhe8192" 00:18:39.984 ] 00:18:39.984 } 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "method": "bdev_nvme_set_hotplug", 00:18:39.984 "params": { 00:18:39.984 "period_us": 100000, 00:18:39.984 "enable": false 00:18:39.984 } 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "method": "bdev_malloc_create", 00:18:39.984 "params": { 00:18:39.984 "name": "malloc0", 00:18:39.984 "num_blocks": 8192, 00:18:39.984 "block_size": 4096, 00:18:39.984 "physical_block_size": 4096, 00:18:39.984 "uuid": "7fb2efbf-e1d9-4082-a0c7-8a05f87a47c3", 00:18:39.984 "optimal_io_boundary": 0 00:18:39.984 } 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "method": "bdev_wait_for_examine" 00:18:39.984 } 00:18:39.984 ] 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "subsystem": "nbd", 00:18:39.984 "config": [] 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "subsystem": "scheduler", 00:18:39.984 "config": [ 00:18:39.984 { 00:18:39.984 "method": "framework_set_scheduler", 00:18:39.984 "params": { 00:18:39.984 "name": "static" 00:18:39.984 } 00:18:39.984 } 00:18:39.984 ] 00:18:39.984 }, 00:18:39.984 { 00:18:39.984 "subsystem": "nvmf", 00:18:39.984 "config": [ 00:18:39.984 { 00:18:39.984 "method": "nvmf_set_config", 00:18:39.984 "params": { 00:18:39.984 "discovery_filter": "match_any", 00:18:39.984 "admin_cmd_passthru": { 00:18:39.984 "identify_ctrlr": false 00:18:39.985 } 00:18:39.985 } 00:18:39.985 }, 00:18:39.985 { 00:18:39.985 "method": "nvmf_set_max_subsystems", 00:18:39.985 "params": { 00:18:39.985 "max_subsystems": 1024 00:18:39.985 } 00:18:39.985 }, 00:18:39.985 { 00:18:39.985 "method": "nvmf_set_crdt", 00:18:39.985 "params": { 00:18:39.985 "crdt1": 0, 00:18:39.985 "crdt2": 0, 00:18:39.985 "crdt3": 0 00:18:39.985 } 00:18:39.985 }, 00:18:39.985 { 00:18:39.985 "method": "nvmf_create_transport", 00:18:39.985 "params": { 00:18:39.985 "trtype": "TCP", 00:18:39.985 "max_queue_depth": 128, 00:18:39.985 "max_io_qpairs_per_ctrlr": 127, 00:18:39.985 "in_capsule_data_size": 4096, 00:18:39.985 "max_io_size": 131072, 00:18:39.985 "io_unit_size": 131072, 00:18:39.985 "max_aq_depth": 128, 00:18:39.985 "num_shared_buffers": 511, 00:18:39.985 "buf_cache_size": 4294967295, 00:18:39.985 "dif_insert_or_strip": false, 00:18:39.985 "zcopy": false, 00:18:39.985 "c2h_success": false, 00:18:39.985 "sock_priority": 0, 00:18:39.985 "abort_timeout_sec": 1, 00:18:39.985 "ack_timeout": 0, 00:18:39.985 "data_wr_pool_size": 0 00:18:39.985 } 00:18:39.985 }, 00:18:39.985 { 00:18:39.985 "method": "nvmf_create_subsystem", 00:18:39.985 "params": { 00:18:39.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.985 "allow_any_host": false, 00:18:39.985 "serial_number": "SPDK00000000000001", 00:18:39.985 "model_number": "SPDK bdev Controller", 00:18:39.985 "max_namespaces": 10, 00:18:39.985 "min_cntlid": 1, 00:18:39.985 "max_cntlid": 65519, 00:18:39.985 "ana_reporting": false 00:18:39.985 } 00:18:39.985 }, 00:18:39.985 { 00:18:39.985 "method": "nvmf_subsystem_add_host", 00:18:39.985 "params": { 00:18:39.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.985 "host": "nqn.2016-06.io.spdk:host1", 
00:18:39.985 "psk": "/tmp/tmp.86E7sGhww9" 00:18:39.985 } 00:18:39.985 }, 00:18:39.985 { 00:18:39.985 "method": "nvmf_subsystem_add_ns", 00:18:39.985 "params": { 00:18:39.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.985 "namespace": { 00:18:39.985 "nsid": 1, 00:18:39.985 "bdev_name": "malloc0", 00:18:39.985 "nguid": "7FB2EFBFE1D94082A0C78A05F87A47C3", 00:18:39.985 "uuid": "7fb2efbf-e1d9-4082-a0c7-8a05f87a47c3", 00:18:39.985 "no_auto_visible": false 00:18:39.985 } 00:18:39.985 } 00:18:39.985 }, 00:18:39.985 { 00:18:39.985 "method": "nvmf_subsystem_add_listener", 00:18:39.985 "params": { 00:18:39.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.985 "listen_address": { 00:18:39.985 "trtype": "TCP", 00:18:39.985 "adrfam": "IPv4", 00:18:39.985 "traddr": "10.0.0.2", 00:18:39.985 "trsvcid": "4420" 00:18:39.985 }, 00:18:39.985 "secure_channel": true 00:18:39.985 } 00:18:39.985 } 00:18:39.985 ] 00:18:39.985 } 00:18:39.985 ] 00:18:39.985 }' 00:18:39.985 02:38:13 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:40.246 02:38:13 -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:40.246 "subsystems": [ 00:18:40.246 { 00:18:40.246 "subsystem": "keyring", 00:18:40.246 "config": [] 00:18:40.246 }, 00:18:40.246 { 00:18:40.246 "subsystem": "iobuf", 00:18:40.246 "config": [ 00:18:40.246 { 00:18:40.246 "method": "iobuf_set_options", 00:18:40.246 "params": { 00:18:40.246 "small_pool_count": 8192, 00:18:40.246 "large_pool_count": 1024, 00:18:40.246 "small_bufsize": 8192, 00:18:40.246 "large_bufsize": 135168 00:18:40.246 } 00:18:40.246 } 00:18:40.246 ] 00:18:40.246 }, 00:18:40.246 { 00:18:40.246 "subsystem": "sock", 00:18:40.246 "config": [ 00:18:40.246 { 00:18:40.246 "method": "sock_impl_set_options", 00:18:40.246 "params": { 00:18:40.246 "impl_name": "posix", 00:18:40.246 "recv_buf_size": 2097152, 00:18:40.246 "send_buf_size": 2097152, 00:18:40.246 "enable_recv_pipe": true, 00:18:40.246 "enable_quickack": false, 00:18:40.246 "enable_placement_id": 0, 00:18:40.246 "enable_zerocopy_send_server": true, 00:18:40.246 "enable_zerocopy_send_client": false, 00:18:40.246 "zerocopy_threshold": 0, 00:18:40.246 "tls_version": 0, 00:18:40.246 "enable_ktls": false 00:18:40.246 } 00:18:40.246 }, 00:18:40.246 { 00:18:40.246 "method": "sock_impl_set_options", 00:18:40.246 "params": { 00:18:40.246 "impl_name": "ssl", 00:18:40.246 "recv_buf_size": 4096, 00:18:40.246 "send_buf_size": 4096, 00:18:40.246 "enable_recv_pipe": true, 00:18:40.246 "enable_quickack": false, 00:18:40.246 "enable_placement_id": 0, 00:18:40.246 "enable_zerocopy_send_server": true, 00:18:40.246 "enable_zerocopy_send_client": false, 00:18:40.246 "zerocopy_threshold": 0, 00:18:40.246 "tls_version": 0, 00:18:40.246 "enable_ktls": false 00:18:40.246 } 00:18:40.246 } 00:18:40.246 ] 00:18:40.246 }, 00:18:40.246 { 00:18:40.246 "subsystem": "vmd", 00:18:40.246 "config": [] 00:18:40.246 }, 00:18:40.246 { 00:18:40.246 "subsystem": "accel", 00:18:40.246 "config": [ 00:18:40.246 { 00:18:40.246 "method": "accel_set_options", 00:18:40.246 "params": { 00:18:40.246 "small_cache_size": 128, 00:18:40.246 "large_cache_size": 16, 00:18:40.246 "task_count": 2048, 00:18:40.246 "sequence_count": 2048, 00:18:40.246 "buf_count": 2048 00:18:40.246 } 00:18:40.246 } 00:18:40.246 ] 00:18:40.246 }, 00:18:40.246 { 00:18:40.246 "subsystem": "bdev", 00:18:40.246 "config": [ 00:18:40.246 { 00:18:40.246 "method": "bdev_set_options", 00:18:40.246 "params": { 00:18:40.246 "bdev_io_pool_size": 65535, 
00:18:40.246 "bdev_io_cache_size": 256, 00:18:40.246 "bdev_auto_examine": true, 00:18:40.246 "iobuf_small_cache_size": 128, 00:18:40.246 "iobuf_large_cache_size": 16 00:18:40.246 } 00:18:40.246 }, 00:18:40.246 { 00:18:40.246 "method": "bdev_raid_set_options", 00:18:40.246 "params": { 00:18:40.246 "process_window_size_kb": 1024 00:18:40.246 } 00:18:40.246 }, 00:18:40.246 { 00:18:40.246 "method": "bdev_iscsi_set_options", 00:18:40.247 "params": { 00:18:40.247 "timeout_sec": 30 00:18:40.247 } 00:18:40.247 }, 00:18:40.247 { 00:18:40.247 "method": "bdev_nvme_set_options", 00:18:40.247 "params": { 00:18:40.247 "action_on_timeout": "none", 00:18:40.247 "timeout_us": 0, 00:18:40.247 "timeout_admin_us": 0, 00:18:40.247 "keep_alive_timeout_ms": 10000, 00:18:40.247 "arbitration_burst": 0, 00:18:40.247 "low_priority_weight": 0, 00:18:40.247 "medium_priority_weight": 0, 00:18:40.247 "high_priority_weight": 0, 00:18:40.247 "nvme_adminq_poll_period_us": 10000, 00:18:40.247 "nvme_ioq_poll_period_us": 0, 00:18:40.247 "io_queue_requests": 512, 00:18:40.247 "delay_cmd_submit": true, 00:18:40.247 "transport_retry_count": 4, 00:18:40.247 "bdev_retry_count": 3, 00:18:40.247 "transport_ack_timeout": 0, 00:18:40.247 "ctrlr_loss_timeout_sec": 0, 00:18:40.247 "reconnect_delay_sec": 0, 00:18:40.247 "fast_io_fail_timeout_sec": 0, 00:18:40.247 "disable_auto_failback": false, 00:18:40.247 "generate_uuids": false, 00:18:40.247 "transport_tos": 0, 00:18:40.247 "nvme_error_stat": false, 00:18:40.247 "rdma_srq_size": 0, 00:18:40.247 "io_path_stat": false, 00:18:40.247 "allow_accel_sequence": false, 00:18:40.247 "rdma_max_cq_size": 0, 00:18:40.247 "rdma_cm_event_timeout_ms": 0, 00:18:40.247 "dhchap_digests": [ 00:18:40.247 "sha256", 00:18:40.247 "sha384", 00:18:40.247 "sha512" 00:18:40.247 ], 00:18:40.247 "dhchap_dhgroups": [ 00:18:40.247 "null", 00:18:40.247 "ffdhe2048", 00:18:40.247 "ffdhe3072", 00:18:40.247 "ffdhe4096", 00:18:40.247 "ffdhe6144", 00:18:40.247 "ffdhe8192" 00:18:40.247 ] 00:18:40.247 } 00:18:40.247 }, 00:18:40.247 { 00:18:40.247 "method": "bdev_nvme_attach_controller", 00:18:40.247 "params": { 00:18:40.247 "name": "TLSTEST", 00:18:40.247 "trtype": "TCP", 00:18:40.247 "adrfam": "IPv4", 00:18:40.247 "traddr": "10.0.0.2", 00:18:40.247 "trsvcid": "4420", 00:18:40.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.247 "prchk_reftag": false, 00:18:40.247 "prchk_guard": false, 00:18:40.247 "ctrlr_loss_timeout_sec": 0, 00:18:40.247 "reconnect_delay_sec": 0, 00:18:40.247 "fast_io_fail_timeout_sec": 0, 00:18:40.247 "psk": "/tmp/tmp.86E7sGhww9", 00:18:40.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:40.247 "hdgst": false, 00:18:40.247 "ddgst": false 00:18:40.247 } 00:18:40.247 }, 00:18:40.247 { 00:18:40.247 "method": "bdev_nvme_set_hotplug", 00:18:40.247 "params": { 00:18:40.247 "period_us": 100000, 00:18:40.247 "enable": false 00:18:40.247 } 00:18:40.247 }, 00:18:40.247 { 00:18:40.247 "method": "bdev_wait_for_examine" 00:18:40.247 } 00:18:40.247 ] 00:18:40.247 }, 00:18:40.247 { 00:18:40.247 "subsystem": "nbd", 00:18:40.247 "config": [] 00:18:40.247 } 00:18:40.247 ] 00:18:40.247 }' 00:18:40.247 02:38:13 -- target/tls.sh@199 -- # killprocess 141823 00:18:40.247 02:38:13 -- common/autotest_common.sh@936 -- # '[' -z 141823 ']' 00:18:40.247 02:38:13 -- common/autotest_common.sh@940 -- # kill -0 141823 00:18:40.247 02:38:13 -- common/autotest_common.sh@941 -- # uname 00:18:40.247 02:38:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:40.247 02:38:13 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 141823 00:18:40.247 02:38:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:40.508 02:38:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:40.508 02:38:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141823' 00:18:40.508 killing process with pid 141823 00:18:40.508 02:38:13 -- common/autotest_common.sh@955 -- # kill 141823 00:18:40.508 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.508 00:18:40.508 Latency(us) 00:18:40.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.508 =================================================================================================================== 00:18:40.508 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:40.508 [2024-04-27 02:38:13.867161] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:40.508 02:38:13 -- common/autotest_common.sh@960 -- # wait 141823 00:18:40.508 02:38:13 -- target/tls.sh@200 -- # killprocess 141460 00:18:40.508 02:38:13 -- common/autotest_common.sh@936 -- # '[' -z 141460 ']' 00:18:40.508 02:38:13 -- common/autotest_common.sh@940 -- # kill -0 141460 00:18:40.508 02:38:13 -- common/autotest_common.sh@941 -- # uname 00:18:40.508 02:38:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:40.508 02:38:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141460 00:18:40.508 02:38:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:40.508 02:38:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:40.508 02:38:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141460' 00:18:40.508 killing process with pid 141460 00:18:40.508 02:38:14 -- common/autotest_common.sh@955 -- # kill 141460 00:18:40.508 [2024-04-27 02:38:14.035594] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:40.508 02:38:14 -- common/autotest_common.sh@960 -- # wait 141460 00:18:40.770 02:38:14 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:40.770 02:38:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:40.770 02:38:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:40.770 02:38:14 -- common/autotest_common.sh@10 -- # set +x 00:18:40.770 02:38:14 -- target/tls.sh@203 -- # echo '{ 00:18:40.770 "subsystems": [ 00:18:40.770 { 00:18:40.770 "subsystem": "keyring", 00:18:40.770 "config": [] 00:18:40.770 }, 00:18:40.770 { 00:18:40.770 "subsystem": "iobuf", 00:18:40.770 "config": [ 00:18:40.770 { 00:18:40.770 "method": "iobuf_set_options", 00:18:40.770 "params": { 00:18:40.770 "small_pool_count": 8192, 00:18:40.770 "large_pool_count": 1024, 00:18:40.770 "small_bufsize": 8192, 00:18:40.770 "large_bufsize": 135168 00:18:40.770 } 00:18:40.770 } 00:18:40.770 ] 00:18:40.770 }, 00:18:40.770 { 00:18:40.770 "subsystem": "sock", 00:18:40.770 "config": [ 00:18:40.770 { 00:18:40.770 "method": "sock_impl_set_options", 00:18:40.770 "params": { 00:18:40.770 "impl_name": "posix", 00:18:40.770 "recv_buf_size": 2097152, 00:18:40.770 "send_buf_size": 2097152, 00:18:40.770 "enable_recv_pipe": true, 00:18:40.770 "enable_quickack": false, 00:18:40.770 "enable_placement_id": 0, 00:18:40.770 "enable_zerocopy_send_server": true, 00:18:40.770 "enable_zerocopy_send_client": false, 00:18:40.770 "zerocopy_threshold": 0, 00:18:40.770 
"tls_version": 0, 00:18:40.770 "enable_ktls": false 00:18:40.770 } 00:18:40.770 }, 00:18:40.770 { 00:18:40.770 "method": "sock_impl_set_options", 00:18:40.770 "params": { 00:18:40.770 "impl_name": "ssl", 00:18:40.770 "recv_buf_size": 4096, 00:18:40.770 "send_buf_size": 4096, 00:18:40.770 "enable_recv_pipe": true, 00:18:40.770 "enable_quickack": false, 00:18:40.771 "enable_placement_id": 0, 00:18:40.771 "enable_zerocopy_send_server": true, 00:18:40.771 "enable_zerocopy_send_client": false, 00:18:40.771 "zerocopy_threshold": 0, 00:18:40.771 "tls_version": 0, 00:18:40.771 "enable_ktls": false 00:18:40.771 } 00:18:40.771 } 00:18:40.771 ] 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "subsystem": "vmd", 00:18:40.771 "config": [] 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "subsystem": "accel", 00:18:40.771 "config": [ 00:18:40.771 { 00:18:40.771 "method": "accel_set_options", 00:18:40.771 "params": { 00:18:40.771 "small_cache_size": 128, 00:18:40.771 "large_cache_size": 16, 00:18:40.771 "task_count": 2048, 00:18:40.771 "sequence_count": 2048, 00:18:40.771 "buf_count": 2048 00:18:40.771 } 00:18:40.771 } 00:18:40.771 ] 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "subsystem": "bdev", 00:18:40.771 "config": [ 00:18:40.771 { 00:18:40.771 "method": "bdev_set_options", 00:18:40.771 "params": { 00:18:40.771 "bdev_io_pool_size": 65535, 00:18:40.771 "bdev_io_cache_size": 256, 00:18:40.771 "bdev_auto_examine": true, 00:18:40.771 "iobuf_small_cache_size": 128, 00:18:40.771 "iobuf_large_cache_size": 16 00:18:40.771 } 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "method": "bdev_raid_set_options", 00:18:40.771 "params": { 00:18:40.771 "process_window_size_kb": 1024 00:18:40.771 } 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "method": "bdev_iscsi_set_options", 00:18:40.771 "params": { 00:18:40.771 "timeout_sec": 30 00:18:40.771 } 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "method": "bdev_nvme_set_options", 00:18:40.771 "params": { 00:18:40.771 "action_on_timeout": "none", 00:18:40.771 "timeout_us": 0, 00:18:40.771 "timeout_admin_us": 0, 00:18:40.771 "keep_alive_timeout_ms": 10000, 00:18:40.771 "arbitration_burst": 0, 00:18:40.771 "low_priority_weight": 0, 00:18:40.771 "medium_priority_weight": 0, 00:18:40.771 "high_priority_weight": 0, 00:18:40.771 "nvme_adminq_poll_period_us": 10000, 00:18:40.771 "nvme_ioq_poll_period_us": 0, 00:18:40.771 "io_queue_requests": 0, 00:18:40.771 "delay_cmd_submit": true, 00:18:40.771 "transport_retry_count": 4, 00:18:40.771 "bdev_retry_count": 3, 00:18:40.771 "transport_ack_timeout": 0, 00:18:40.771 "ctrlr_loss_timeout_sec": 0, 00:18:40.771 "reconnect_delay_sec": 0, 00:18:40.771 "fast_io_fail_timeout_sec": 0, 00:18:40.771 "disable_auto_failback": false, 00:18:40.771 "generate_uuids": false, 00:18:40.771 "transport_tos": 0, 00:18:40.771 "nvme_error_stat": false, 00:18:40.771 "rdma_srq_size": 0, 00:18:40.771 "io_path_stat": false, 00:18:40.771 "allow_accel_sequence": false, 00:18:40.771 "rdma_max_cq_size": 0, 00:18:40.771 "rdma_cm_event_timeout_ms": 0, 00:18:40.771 "dhchap_digests": [ 00:18:40.771 "sha256", 00:18:40.771 "sha384", 00:18:40.771 "sha512" 00:18:40.771 ], 00:18:40.771 "dhchap_dhgroups": [ 00:18:40.771 "null", 00:18:40.771 "ffdhe2048", 00:18:40.771 "ffdhe3072", 00:18:40.771 "ffdhe4096", 00:18:40.771 "ffdhe6144", 00:18:40.771 "ffdhe8192" 00:18:40.771 ] 00:18:40.771 } 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "method": "bdev_nvme_set_hotplug", 00:18:40.771 "params": { 00:18:40.771 "period_us": 100000, 00:18:40.771 "enable": false 00:18:40.771 } 00:18:40.771 }, 00:18:40.771 
{ 00:18:40.771 "method": "bdev_malloc_create", 00:18:40.771 "params": { 00:18:40.771 "name": "malloc0", 00:18:40.771 "num_blocks": 8192, 00:18:40.771 "block_size": 4096, 00:18:40.771 "physical_block_size": 4096, 00:18:40.771 "uuid": "7fb2efbf-e1d9-4082-a0c7-8a05f87a47c3", 00:18:40.771 "optimal_io_boundary": 0 00:18:40.771 } 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "method": "bdev_wait_for_examine" 00:18:40.771 } 00:18:40.771 ] 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "subsystem": "nbd", 00:18:40.771 "config": [] 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "subsystem": "scheduler", 00:18:40.771 "config": [ 00:18:40.771 { 00:18:40.771 "method": "framework_set_scheduler", 00:18:40.771 "params": { 00:18:40.771 "name": "static" 00:18:40.771 } 00:18:40.771 } 00:18:40.771 ] 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "subsystem": "nvmf", 00:18:40.771 "config": [ 00:18:40.771 { 00:18:40.771 "method": "nvmf_set_config", 00:18:40.771 "params": { 00:18:40.771 "discovery_filter": "match_any", 00:18:40.771 "admin_cmd_passthru": { 00:18:40.771 "identify_ctrlr": false 00:18:40.771 } 00:18:40.771 } 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "method": "nvmf_set_max_subsystems", 00:18:40.771 "params": { 00:18:40.771 "max_subsystems": 1024 00:18:40.771 } 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "method": "nvmf_set_crdt", 00:18:40.771 "params": { 00:18:40.771 "crdt1": 0, 00:18:40.771 "crdt2": 0, 00:18:40.771 "crdt3": 0 00:18:40.771 } 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "method": "nvmf_create_transport", 00:18:40.771 "params": { 00:18:40.771 "trtype": "TCP", 00:18:40.771 "max_queue_depth": 128, 00:18:40.771 "max_io_qpairs_per_ctrlr": 127, 00:18:40.771 "in_capsule_data_size": 4096, 00:18:40.771 "max_io_size": 131072, 00:18:40.771 "io_unit_size": 131072, 00:18:40.771 "max_aq_depth": 128, 00:18:40.771 "num_shared_buffers": 511, 00:18:40.771 "buf_cache_size": 4294967295, 00:18:40.771 "dif_insert_or_strip": false, 00:18:40.771 "zcopy": false, 00:18:40.771 "c2h_success": false, 00:18:40.771 "sock_priority": 0, 00:18:40.771 "abort_timeout_sec": 1, 00:18:40.771 "ack_timeout": 0, 00:18:40.771 "data_wr_pool_size": 0 00:18:40.771 } 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "method": "nvmf_create_subsystem", 00:18:40.771 "params": { 00:18:40.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.771 "allow_any_host": false, 00:18:40.771 "serial_number": "SPDK00000000000001", 00:18:40.771 "model_number": "SPDK bdev Controller", 00:18:40.771 "max_namespaces": 10, 00:18:40.771 "min_cntlid": 1, 00:18:40.771 "max_cntlid": 65519, 00:18:40.771 "ana_reporting": false 00:18:40.771 } 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "method": "nvmf_subsystem_add_host", 00:18:40.771 "params": { 00:18:40.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.771 "host": "nqn.2016-06.io.spdk:host1", 00:18:40.771 "psk": "/tmp/tmp.86E7sGhww9" 00:18:40.771 } 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "method": "nvmf_subsystem_add_ns", 00:18:40.771 "params": { 00:18:40.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.771 "namespace": { 00:18:40.771 "nsid": 1, 00:18:40.771 "bdev_name": "malloc0", 00:18:40.771 "nguid": "7FB2EFBFE1D94082A0C78A05F87A47C3", 00:18:40.771 "uuid": "7fb2efbf-e1d9-4082-a0c7-8a05f87a47c3", 00:18:40.771 "no_auto_visible": false 00:18:40.771 } 00:18:40.771 } 00:18:40.771 }, 00:18:40.771 { 00:18:40.771 "method": "nvmf_subsystem_add_listener", 00:18:40.771 "params": { 00:18:40.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.771 "listen_address": { 00:18:40.771 "trtype": "TCP", 00:18:40.771 "adrfam": "IPv4", 
00:18:40.771 "traddr": "10.0.0.2", 00:18:40.771 "trsvcid": "4420" 00:18:40.771 }, 00:18:40.771 "secure_channel": true 00:18:40.771 } 00:18:40.771 } 00:18:40.771 ] 00:18:40.771 } 00:18:40.771 ] 00:18:40.771 }' 00:18:40.771 02:38:14 -- nvmf/common.sh@470 -- # nvmfpid=142167 00:18:40.771 02:38:14 -- nvmf/common.sh@471 -- # waitforlisten 142167 00:18:40.771 02:38:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:40.771 02:38:14 -- common/autotest_common.sh@817 -- # '[' -z 142167 ']' 00:18:40.771 02:38:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.771 02:38:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:40.772 02:38:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.772 02:38:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:40.772 02:38:14 -- common/autotest_common.sh@10 -- # set +x 00:18:40.772 [2024-04-27 02:38:14.230891] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:40.772 [2024-04-27 02:38:14.230943] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.772 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.772 [2024-04-27 02:38:14.295526] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.772 [2024-04-27 02:38:14.357088] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.772 [2024-04-27 02:38:14.357126] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.772 [2024-04-27 02:38:14.357133] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.772 [2024-04-27 02:38:14.357140] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.772 [2024-04-27 02:38:14.357145] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:40.772 [2024-04-27 02:38:14.357208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.033 [2024-04-27 02:38:14.538811] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.033 [2024-04-27 02:38:14.554759] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:41.033 [2024-04-27 02:38:14.570810] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:41.033 [2024-04-27 02:38:14.578608] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.606 02:38:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:41.606 02:38:14 -- common/autotest_common.sh@850 -- # return 0 00:18:41.606 02:38:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:41.606 02:38:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:41.606 02:38:14 -- common/autotest_common.sh@10 -- # set +x 00:18:41.606 02:38:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.606 02:38:15 -- target/tls.sh@207 -- # bdevperf_pid=142325 00:18:41.606 02:38:15 -- target/tls.sh@208 -- # waitforlisten 142325 /var/tmp/bdevperf.sock 00:18:41.606 02:38:15 -- common/autotest_common.sh@817 -- # '[' -z 142325 ']' 00:18:41.606 02:38:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.606 02:38:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:41.606 02:38:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
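The bdevperf instance launched next is configured the same way: the script echoes a JSON document and passes it to the application over an anonymous file descriptor (-c /dev/fd/63). Roughly, and assuming the JSON is held in a shell variable named bperfcfg (the variable name and the process substitution are illustrative, not the literal script text), this amounts to:
    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    $BDEVPERF -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bperfcfg") &
    # the I/O itself is kicked off afterwards with:
    #   bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests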
00:18:41.606 02:38:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:41.606 02:38:15 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:41.606 02:38:15 -- common/autotest_common.sh@10 -- # set +x 00:18:41.606 02:38:15 -- target/tls.sh@204 -- # echo '{ 00:18:41.606 "subsystems": [ 00:18:41.606 { 00:18:41.606 "subsystem": "keyring", 00:18:41.606 "config": [] 00:18:41.606 }, 00:18:41.606 { 00:18:41.606 "subsystem": "iobuf", 00:18:41.606 "config": [ 00:18:41.606 { 00:18:41.606 "method": "iobuf_set_options", 00:18:41.606 "params": { 00:18:41.606 "small_pool_count": 8192, 00:18:41.606 "large_pool_count": 1024, 00:18:41.606 "small_bufsize": 8192, 00:18:41.606 "large_bufsize": 135168 00:18:41.606 } 00:18:41.606 } 00:18:41.606 ] 00:18:41.606 }, 00:18:41.606 { 00:18:41.606 "subsystem": "sock", 00:18:41.606 "config": [ 00:18:41.606 { 00:18:41.606 "method": "sock_impl_set_options", 00:18:41.606 "params": { 00:18:41.606 "impl_name": "posix", 00:18:41.606 "recv_buf_size": 2097152, 00:18:41.606 "send_buf_size": 2097152, 00:18:41.606 "enable_recv_pipe": true, 00:18:41.606 "enable_quickack": false, 00:18:41.606 "enable_placement_id": 0, 00:18:41.606 "enable_zerocopy_send_server": true, 00:18:41.606 "enable_zerocopy_send_client": false, 00:18:41.606 "zerocopy_threshold": 0, 00:18:41.606 "tls_version": 0, 00:18:41.606 "enable_ktls": false 00:18:41.606 } 00:18:41.606 }, 00:18:41.606 { 00:18:41.606 "method": "sock_impl_set_options", 00:18:41.606 "params": { 00:18:41.606 "impl_name": "ssl", 00:18:41.606 "recv_buf_size": 4096, 00:18:41.607 "send_buf_size": 4096, 00:18:41.607 "enable_recv_pipe": true, 00:18:41.607 "enable_quickack": false, 00:18:41.607 "enable_placement_id": 0, 00:18:41.607 "enable_zerocopy_send_server": true, 00:18:41.607 "enable_zerocopy_send_client": false, 00:18:41.607 "zerocopy_threshold": 0, 00:18:41.607 "tls_version": 0, 00:18:41.607 "enable_ktls": false 00:18:41.607 } 00:18:41.607 } 00:18:41.607 ] 00:18:41.607 }, 00:18:41.607 { 00:18:41.607 "subsystem": "vmd", 00:18:41.607 "config": [] 00:18:41.607 }, 00:18:41.607 { 00:18:41.607 "subsystem": "accel", 00:18:41.607 "config": [ 00:18:41.607 { 00:18:41.607 "method": "accel_set_options", 00:18:41.607 "params": { 00:18:41.607 "small_cache_size": 128, 00:18:41.607 "large_cache_size": 16, 00:18:41.607 "task_count": 2048, 00:18:41.607 "sequence_count": 2048, 00:18:41.607 "buf_count": 2048 00:18:41.607 } 00:18:41.607 } 00:18:41.607 ] 00:18:41.607 }, 00:18:41.607 { 00:18:41.607 "subsystem": "bdev", 00:18:41.607 "config": [ 00:18:41.607 { 00:18:41.607 "method": "bdev_set_options", 00:18:41.607 "params": { 00:18:41.607 "bdev_io_pool_size": 65535, 00:18:41.607 "bdev_io_cache_size": 256, 00:18:41.607 "bdev_auto_examine": true, 00:18:41.607 "iobuf_small_cache_size": 128, 00:18:41.607 "iobuf_large_cache_size": 16 00:18:41.607 } 00:18:41.607 }, 00:18:41.607 { 00:18:41.607 "method": "bdev_raid_set_options", 00:18:41.607 "params": { 00:18:41.607 "process_window_size_kb": 1024 00:18:41.607 } 00:18:41.607 }, 00:18:41.607 { 00:18:41.607 "method": "bdev_iscsi_set_options", 00:18:41.607 "params": { 00:18:41.607 "timeout_sec": 30 00:18:41.607 } 00:18:41.607 }, 00:18:41.607 { 00:18:41.607 "method": "bdev_nvme_set_options", 00:18:41.607 "params": { 00:18:41.607 "action_on_timeout": "none", 00:18:41.607 "timeout_us": 0, 00:18:41.607 "timeout_admin_us": 0, 00:18:41.607 "keep_alive_timeout_ms": 10000, 00:18:41.607 
"arbitration_burst": 0, 00:18:41.607 "low_priority_weight": 0, 00:18:41.607 "medium_priority_weight": 0, 00:18:41.607 "high_priority_weight": 0, 00:18:41.607 "nvme_adminq_poll_period_us": 10000, 00:18:41.607 "nvme_ioq_poll_period_us": 0, 00:18:41.607 "io_queue_requests": 512, 00:18:41.607 "delay_cmd_submit": true, 00:18:41.607 "transport_retry_count": 4, 00:18:41.607 "bdev_retry_count": 3, 00:18:41.607 "transport_ack_timeout": 0, 00:18:41.607 "ctrlr_loss_timeout_sec": 0, 00:18:41.607 "reconnect_delay_sec": 0, 00:18:41.607 "fast_io_fail_timeout_sec": 0, 00:18:41.607 "disable_auto_failback": false, 00:18:41.607 "generate_uuids": false, 00:18:41.607 "transport_tos": 0, 00:18:41.607 "nvme_error_stat": false, 00:18:41.607 "rdma_srq_size": 0, 00:18:41.607 "io_path_stat": false, 00:18:41.607 "allow_accel_sequence": false, 00:18:41.607 "rdma_max_cq_size": 0, 00:18:41.607 "rdma_cm_event_timeout_ms": 0, 00:18:41.607 "dhchap_digests": [ 00:18:41.607 "sha256", 00:18:41.607 "sha384", 00:18:41.607 "sha512" 00:18:41.607 ], 00:18:41.607 "dhchap_dhgroups": [ 00:18:41.607 "null", 00:18:41.607 "ffdhe2048", 00:18:41.607 "ffdhe3072", 00:18:41.607 "ffdhe4096", 00:18:41.607 "ffdhe6144", 00:18:41.607 "ffdhe8192" 00:18:41.607 ] 00:18:41.607 } 00:18:41.607 }, 00:18:41.607 { 00:18:41.607 "method": "bdev_nvme_attach_controller", 00:18:41.607 "params": { 00:18:41.607 "name": "TLSTEST", 00:18:41.607 "trtype": "TCP", 00:18:41.607 "adrfam": "IPv4", 00:18:41.607 "traddr": "10.0.0.2", 00:18:41.607 "trsvcid": "4420", 00:18:41.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.607 "prchk_reftag": false, 00:18:41.607 "prchk_guard": false, 00:18:41.607 "ctrlr_loss_timeout_sec": 0, 00:18:41.607 "reconnect_delay_sec": 0, 00:18:41.607 "fast_io_fail_timeout_sec": 0, 00:18:41.607 "psk": "/tmp/tmp.86E7sGhww9", 00:18:41.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.607 "hdgst": false, 00:18:41.607 "ddgst": false 00:18:41.607 } 00:18:41.607 }, 00:18:41.607 { 00:18:41.607 "method": "bdev_nvme_set_hotplug", 00:18:41.607 "params": { 00:18:41.607 "period_us": 100000, 00:18:41.607 "enable": false 00:18:41.607 } 00:18:41.607 }, 00:18:41.607 { 00:18:41.607 "method": "bdev_wait_for_examine" 00:18:41.607 } 00:18:41.607 ] 00:18:41.607 }, 00:18:41.607 { 00:18:41.607 "subsystem": "nbd", 00:18:41.607 "config": [] 00:18:41.607 } 00:18:41.607 ] 00:18:41.607 }' 00:18:41.607 [2024-04-27 02:38:15.075861] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:18:41.607 [2024-04-27 02:38:15.075915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142325 ] 00:18:41.607 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.607 [2024-04-27 02:38:15.126042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.607 [2024-04-27 02:38:15.177106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.869 [2024-04-27 02:38:15.293979] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.869 [2024-04-27 02:38:15.294040] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:42.442 02:38:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:42.442 02:38:15 -- common/autotest_common.sh@850 -- # return 0 00:18:42.442 02:38:15 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:42.442 Running I/O for 10 seconds... 00:18:52.447 00:18:52.447 Latency(us) 00:18:52.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.448 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:52.448 Verification LBA range: start 0x0 length 0x2000 00:18:52.448 TLSTESTn1 : 10.08 1788.97 6.99 0.00 0.00 71290.42 6144.00 144179.20 00:18:52.448 =================================================================================================================== 00:18:52.448 Total : 1788.97 6.99 0.00 0.00 71290.42 6144.00 144179.20 00:18:52.448 0 00:18:52.448 02:38:26 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:52.448 02:38:26 -- target/tls.sh@214 -- # killprocess 142325 00:18:52.448 02:38:26 -- common/autotest_common.sh@936 -- # '[' -z 142325 ']' 00:18:52.448 02:38:26 -- common/autotest_common.sh@940 -- # kill -0 142325 00:18:52.448 02:38:26 -- common/autotest_common.sh@941 -- # uname 00:18:52.448 02:38:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:52.448 02:38:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142325 00:18:52.709 02:38:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:52.709 02:38:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:52.709 02:38:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142325' 00:18:52.709 killing process with pid 142325 00:18:52.709 02:38:26 -- common/autotest_common.sh@955 -- # kill 142325 00:18:52.709 Received shutdown signal, test time was about 10.000000 seconds 00:18:52.709 00:18:52.709 Latency(us) 00:18:52.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.709 =================================================================================================================== 00:18:52.709 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.709 [2024-04-27 02:38:26.088967] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:52.709 02:38:26 -- common/autotest_common.sh@960 -- # wait 142325 00:18:52.709 02:38:26 -- target/tls.sh@215 -- # killprocess 142167 00:18:52.709 02:38:26 -- common/autotest_common.sh@936 -- # '[' -z 142167 ']' 00:18:52.709 
02:38:26 -- common/autotest_common.sh@940 -- # kill -0 142167 00:18:52.709 02:38:26 -- common/autotest_common.sh@941 -- # uname 00:18:52.709 02:38:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:52.709 02:38:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142167 00:18:52.709 02:38:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:52.709 02:38:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:52.709 02:38:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142167' 00:18:52.709 killing process with pid 142167 00:18:52.709 02:38:26 -- common/autotest_common.sh@955 -- # kill 142167 00:18:52.709 [2024-04-27 02:38:26.255645] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:52.709 02:38:26 -- common/autotest_common.sh@960 -- # wait 142167 00:18:52.971 02:38:26 -- target/tls.sh@218 -- # nvmfappstart 00:18:52.971 02:38:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:52.971 02:38:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:52.971 02:38:26 -- common/autotest_common.sh@10 -- # set +x 00:18:52.971 02:38:26 -- nvmf/common.sh@470 -- # nvmfpid=144539 00:18:52.971 02:38:26 -- nvmf/common.sh@471 -- # waitforlisten 144539 00:18:52.971 02:38:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:52.971 02:38:26 -- common/autotest_common.sh@817 -- # '[' -z 144539 ']' 00:18:52.971 02:38:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.971 02:38:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:52.971 02:38:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.971 02:38:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:52.971 02:38:26 -- common/autotest_common.sh@10 -- # set +x 00:18:52.971 [2024-04-27 02:38:26.455197] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:52.971 [2024-04-27 02:38:26.455249] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.971 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.971 [2024-04-27 02:38:26.519042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.971 [2024-04-27 02:38:26.580522] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.971 [2024-04-27 02:38:26.580561] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.971 [2024-04-27 02:38:26.580568] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.971 [2024-04-27 02:38:26.580579] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.971 [2024-04-27 02:38:26.580585] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
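As a quick sanity check on the 10-second TLSTESTn1 run reported above, the IOPS, throughput and latency figures are self-consistent for 4096-byte I/Os at queue depth 128 (a throwaway calculation, not part of the test itself):
    awk 'BEGIN { printf "%.2f MiB/s\n", 1788.97 * 4096 / 1048576 }'   # 6.99, matching the table
    awk 'BEGIN { printf "%.0f IOPS\n", 128 / (71290.42 / 1e6) }'      # ~1795, close to the reported 1788.97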
00:18:52.971 [2024-04-27 02:38:26.580612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.927 02:38:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:53.927 02:38:27 -- common/autotest_common.sh@850 -- # return 0 00:18:53.927 02:38:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:53.927 02:38:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:53.927 02:38:27 -- common/autotest_common.sh@10 -- # set +x 00:18:53.927 02:38:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.928 02:38:27 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.86E7sGhww9 00:18:53.928 02:38:27 -- target/tls.sh@49 -- # local key=/tmp/tmp.86E7sGhww9 00:18:53.928 02:38:27 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:53.928 [2024-04-27 02:38:27.391440] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.928 02:38:27 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:54.188 02:38:27 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:54.188 [2024-04-27 02:38:27.700221] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:54.188 [2024-04-27 02:38:27.700444] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.188 02:38:27 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:54.466 malloc0 00:18:54.466 02:38:27 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:54.466 02:38:28 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.86E7sGhww9 00:18:54.756 [2024-04-27 02:38:28.164261] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:54.756 02:38:28 -- target/tls.sh@222 -- # bdevperf_pid=144901 00:18:54.756 02:38:28 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.756 02:38:28 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:54.756 02:38:28 -- target/tls.sh@225 -- # waitforlisten 144901 /var/tmp/bdevperf.sock 00:18:54.756 02:38:28 -- common/autotest_common.sh@817 -- # '[' -z 144901 ']' 00:18:54.756 02:38:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.756 02:38:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:54.756 02:38:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
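waitforlisten blocks until the freshly forked bdevperf answers on its RPC socket before any further RPCs are sent. The real helper lives in autotest_common.sh; a hypothetical stand-in (not the actual implementation) that captures the idea is to poll the socket with a harmless RPC:
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    until $RPC -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done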
00:18:54.756 02:38:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:54.756 02:38:28 -- common/autotest_common.sh@10 -- # set +x 00:18:54.756 [2024-04-27 02:38:28.224805] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:54.756 [2024-04-27 02:38:28.224859] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144901 ] 00:18:54.756 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.756 [2024-04-27 02:38:28.282360] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.756 [2024-04-27 02:38:28.344731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.699 02:38:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:55.699 02:38:28 -- common/autotest_common.sh@850 -- # return 0 00:18:55.699 02:38:28 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.86E7sGhww9 00:18:55.699 02:38:29 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:55.699 [2024-04-27 02:38:29.279234] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.960 nvme0n1 00:18:55.960 02:38:29 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:55.960 Running I/O for 1 seconds... 
00:18:57.344 00:18:57.344 Latency(us) 00:18:57.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.344 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:57.344 Verification LBA range: start 0x0 length 0x2000 00:18:57.344 nvme0n1 : 1.07 1360.66 5.32 0.00 0.00 91313.04 8901.97 138062.51 00:18:57.344 =================================================================================================================== 00:18:57.344 Total : 1360.66 5.32 0.00 0.00 91313.04 8901.97 138062.51 00:18:57.344 0 00:18:57.344 02:38:30 -- target/tls.sh@234 -- # killprocess 144901 00:18:57.344 02:38:30 -- common/autotest_common.sh@936 -- # '[' -z 144901 ']' 00:18:57.344 02:38:30 -- common/autotest_common.sh@940 -- # kill -0 144901 00:18:57.344 02:38:30 -- common/autotest_common.sh@941 -- # uname 00:18:57.344 02:38:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:57.344 02:38:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 144901 00:18:57.344 02:38:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:57.344 02:38:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:57.344 02:38:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 144901' 00:18:57.344 killing process with pid 144901 00:18:57.344 02:38:30 -- common/autotest_common.sh@955 -- # kill 144901 00:18:57.344 Received shutdown signal, test time was about 1.000000 seconds 00:18:57.344 00:18:57.344 Latency(us) 00:18:57.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.344 =================================================================================================================== 00:18:57.344 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.344 02:38:30 -- common/autotest_common.sh@960 -- # wait 144901 00:18:57.344 02:38:30 -- target/tls.sh@235 -- # killprocess 144539 00:18:57.344 02:38:30 -- common/autotest_common.sh@936 -- # '[' -z 144539 ']' 00:18:57.344 02:38:30 -- common/autotest_common.sh@940 -- # kill -0 144539 00:18:57.344 02:38:30 -- common/autotest_common.sh@941 -- # uname 00:18:57.344 02:38:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:57.344 02:38:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 144539 00:18:57.344 02:38:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:57.344 02:38:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:57.345 02:38:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 144539' 00:18:57.345 killing process with pid 144539 00:18:57.345 02:38:30 -- common/autotest_common.sh@955 -- # kill 144539 00:18:57.345 [2024-04-27 02:38:30.807677] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:57.345 02:38:30 -- common/autotest_common.sh@960 -- # wait 144539 00:18:57.345 02:38:30 -- target/tls.sh@238 -- # nvmfappstart 00:18:57.345 02:38:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:57.345 02:38:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:57.345 02:38:30 -- common/autotest_common.sh@10 -- # set +x 00:18:57.345 02:38:30 -- nvmf/common.sh@470 -- # nvmfpid=145555 00:18:57.345 02:38:30 -- nvmf/common.sh@471 -- # waitforlisten 145555 00:18:57.345 02:38:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:57.345 02:38:30 -- 
common/autotest_common.sh@817 -- # '[' -z 145555 ']' 00:18:57.345 02:38:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.345 02:38:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:57.345 02:38:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.345 02:38:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:57.345 02:38:30 -- common/autotest_common.sh@10 -- # set +x 00:18:57.605 [2024-04-27 02:38:31.014041] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:57.605 [2024-04-27 02:38:31.014111] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.605 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.605 [2024-04-27 02:38:31.080754] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.605 [2024-04-27 02:38:31.143497] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.605 [2024-04-27 02:38:31.143532] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.605 [2024-04-27 02:38:31.143539] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.605 [2024-04-27 02:38:31.143545] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.605 [2024-04-27 02:38:31.143551] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
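The initiator side below again uses the keyring flow: rather than handing the raw PSK file path to bdev_nvme_attach_controller (the form that produced the spdk_nvme_ctrlr_opts.psk deprecation warning earlier in this run), the key file is first registered under a name and the controller references it as key0. Against the bdevperf RPC socket that reduces to the two calls traced further down; pulled out as a standalone sketch:
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.86E7sGhww9
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1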
00:18:57.605 [2024-04-27 02:38:31.143575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.178 02:38:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:58.178 02:38:31 -- common/autotest_common.sh@850 -- # return 0 00:18:58.178 02:38:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:58.178 02:38:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:58.178 02:38:31 -- common/autotest_common.sh@10 -- # set +x 00:18:58.440 02:38:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.440 02:38:31 -- target/tls.sh@239 -- # rpc_cmd 00:18:58.440 02:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.440 02:38:31 -- common/autotest_common.sh@10 -- # set +x 00:18:58.440 [2024-04-27 02:38:31.810402] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.440 malloc0 00:18:58.440 [2024-04-27 02:38:31.837033] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:58.440 [2024-04-27 02:38:31.837228] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.440 02:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.440 02:38:31 -- target/tls.sh@252 -- # bdevperf_pid=145608 00:18:58.440 02:38:31 -- target/tls.sh@254 -- # waitforlisten 145608 /var/tmp/bdevperf.sock 00:18:58.440 02:38:31 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:58.440 02:38:31 -- common/autotest_common.sh@817 -- # '[' -z 145608 ']' 00:18:58.440 02:38:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.440 02:38:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:58.440 02:38:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.440 02:38:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:58.440 02:38:31 -- common/autotest_common.sh@10 -- # set +x 00:18:58.440 [2024-04-27 02:38:31.913991] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:18:58.440 [2024-04-27 02:38:31.914043] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145608 ] 00:18:58.440 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.440 [2024-04-27 02:38:31.972347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.440 [2024-04-27 02:38:32.034803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.385 02:38:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:59.385 02:38:32 -- common/autotest_common.sh@850 -- # return 0 00:18:59.385 02:38:32 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.86E7sGhww9 00:18:59.385 02:38:32 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:59.385 [2024-04-27 02:38:32.965376] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.646 nvme0n1 00:18:59.646 02:38:33 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:59.646 Running I/O for 1 seconds... 00:19:01.030 00:19:01.030 Latency(us) 00:19:01.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.030 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:01.030 Verification LBA range: start 0x0 length 0x2000 00:19:01.030 nvme0n1 : 1.09 1312.31 5.13 0.00 0.00 94274.20 7973.55 163403.09 00:19:01.030 =================================================================================================================== 00:19:01.030 Total : 1312.31 5.13 0.00 0.00 94274.20 7973.55 163403.09 00:19:01.030 0 00:19:01.030 02:38:34 -- target/tls.sh@263 -- # rpc_cmd save_config 00:19:01.030 02:38:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.030 02:38:34 -- common/autotest_common.sh@10 -- # set +x 00:19:01.030 02:38:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.030 02:38:34 -- target/tls.sh@263 -- # tgtcfg='{ 00:19:01.030 "subsystems": [ 00:19:01.030 { 00:19:01.030 "subsystem": "keyring", 00:19:01.030 "config": [ 00:19:01.030 { 00:19:01.030 "method": "keyring_file_add_key", 00:19:01.030 "params": { 00:19:01.030 "name": "key0", 00:19:01.030 "path": "/tmp/tmp.86E7sGhww9" 00:19:01.030 } 00:19:01.030 } 00:19:01.030 ] 00:19:01.030 }, 00:19:01.030 { 00:19:01.030 "subsystem": "iobuf", 00:19:01.030 "config": [ 00:19:01.030 { 00:19:01.030 "method": "iobuf_set_options", 00:19:01.030 "params": { 00:19:01.030 "small_pool_count": 8192, 00:19:01.030 "large_pool_count": 1024, 00:19:01.030 "small_bufsize": 8192, 00:19:01.030 "large_bufsize": 135168 00:19:01.030 } 00:19:01.030 } 00:19:01.030 ] 00:19:01.030 }, 00:19:01.030 { 00:19:01.030 "subsystem": "sock", 00:19:01.030 "config": [ 00:19:01.030 { 00:19:01.030 "method": "sock_impl_set_options", 00:19:01.030 "params": { 00:19:01.030 "impl_name": "posix", 00:19:01.030 "recv_buf_size": 2097152, 00:19:01.030 "send_buf_size": 2097152, 00:19:01.030 "enable_recv_pipe": true, 00:19:01.030 "enable_quickack": false, 00:19:01.030 "enable_placement_id": 0, 00:19:01.030 
"enable_zerocopy_send_server": true, 00:19:01.030 "enable_zerocopy_send_client": false, 00:19:01.030 "zerocopy_threshold": 0, 00:19:01.030 "tls_version": 0, 00:19:01.030 "enable_ktls": false 00:19:01.030 } 00:19:01.030 }, 00:19:01.030 { 00:19:01.030 "method": "sock_impl_set_options", 00:19:01.030 "params": { 00:19:01.030 "impl_name": "ssl", 00:19:01.030 "recv_buf_size": 4096, 00:19:01.030 "send_buf_size": 4096, 00:19:01.030 "enable_recv_pipe": true, 00:19:01.030 "enable_quickack": false, 00:19:01.030 "enable_placement_id": 0, 00:19:01.030 "enable_zerocopy_send_server": true, 00:19:01.030 "enable_zerocopy_send_client": false, 00:19:01.030 "zerocopy_threshold": 0, 00:19:01.030 "tls_version": 0, 00:19:01.030 "enable_ktls": false 00:19:01.030 } 00:19:01.030 } 00:19:01.030 ] 00:19:01.030 }, 00:19:01.030 { 00:19:01.030 "subsystem": "vmd", 00:19:01.030 "config": [] 00:19:01.030 }, 00:19:01.030 { 00:19:01.030 "subsystem": "accel", 00:19:01.030 "config": [ 00:19:01.030 { 00:19:01.030 "method": "accel_set_options", 00:19:01.030 "params": { 00:19:01.030 "small_cache_size": 128, 00:19:01.030 "large_cache_size": 16, 00:19:01.030 "task_count": 2048, 00:19:01.030 "sequence_count": 2048, 00:19:01.030 "buf_count": 2048 00:19:01.030 } 00:19:01.030 } 00:19:01.030 ] 00:19:01.030 }, 00:19:01.030 { 00:19:01.030 "subsystem": "bdev", 00:19:01.030 "config": [ 00:19:01.030 { 00:19:01.030 "method": "bdev_set_options", 00:19:01.030 "params": { 00:19:01.030 "bdev_io_pool_size": 65535, 00:19:01.030 "bdev_io_cache_size": 256, 00:19:01.030 "bdev_auto_examine": true, 00:19:01.030 "iobuf_small_cache_size": 128, 00:19:01.030 "iobuf_large_cache_size": 16 00:19:01.030 } 00:19:01.030 }, 00:19:01.030 { 00:19:01.030 "method": "bdev_raid_set_options", 00:19:01.030 "params": { 00:19:01.030 "process_window_size_kb": 1024 00:19:01.030 } 00:19:01.030 }, 00:19:01.030 { 00:19:01.030 "method": "bdev_iscsi_set_options", 00:19:01.030 "params": { 00:19:01.030 "timeout_sec": 30 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": "bdev_nvme_set_options", 00:19:01.031 "params": { 00:19:01.031 "action_on_timeout": "none", 00:19:01.031 "timeout_us": 0, 00:19:01.031 "timeout_admin_us": 0, 00:19:01.031 "keep_alive_timeout_ms": 10000, 00:19:01.031 "arbitration_burst": 0, 00:19:01.031 "low_priority_weight": 0, 00:19:01.031 "medium_priority_weight": 0, 00:19:01.031 "high_priority_weight": 0, 00:19:01.031 "nvme_adminq_poll_period_us": 10000, 00:19:01.031 "nvme_ioq_poll_period_us": 0, 00:19:01.031 "io_queue_requests": 0, 00:19:01.031 "delay_cmd_submit": true, 00:19:01.031 "transport_retry_count": 4, 00:19:01.031 "bdev_retry_count": 3, 00:19:01.031 "transport_ack_timeout": 0, 00:19:01.031 "ctrlr_loss_timeout_sec": 0, 00:19:01.031 "reconnect_delay_sec": 0, 00:19:01.031 "fast_io_fail_timeout_sec": 0, 00:19:01.031 "disable_auto_failback": false, 00:19:01.031 "generate_uuids": false, 00:19:01.031 "transport_tos": 0, 00:19:01.031 "nvme_error_stat": false, 00:19:01.031 "rdma_srq_size": 0, 00:19:01.031 "io_path_stat": false, 00:19:01.031 "allow_accel_sequence": false, 00:19:01.031 "rdma_max_cq_size": 0, 00:19:01.031 "rdma_cm_event_timeout_ms": 0, 00:19:01.031 "dhchap_digests": [ 00:19:01.031 "sha256", 00:19:01.031 "sha384", 00:19:01.031 "sha512" 00:19:01.031 ], 00:19:01.031 "dhchap_dhgroups": [ 00:19:01.031 "null", 00:19:01.031 "ffdhe2048", 00:19:01.031 "ffdhe3072", 00:19:01.031 "ffdhe4096", 00:19:01.031 "ffdhe6144", 00:19:01.031 "ffdhe8192" 00:19:01.031 ] 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": 
"bdev_nvme_set_hotplug", 00:19:01.031 "params": { 00:19:01.031 "period_us": 100000, 00:19:01.031 "enable": false 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": "bdev_malloc_create", 00:19:01.031 "params": { 00:19:01.031 "name": "malloc0", 00:19:01.031 "num_blocks": 8192, 00:19:01.031 "block_size": 4096, 00:19:01.031 "physical_block_size": 4096, 00:19:01.031 "uuid": "9e5d1d6d-dead-429a-a2a1-387ec443479c", 00:19:01.031 "optimal_io_boundary": 0 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": "bdev_wait_for_examine" 00:19:01.031 } 00:19:01.031 ] 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "subsystem": "nbd", 00:19:01.031 "config": [] 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "subsystem": "scheduler", 00:19:01.031 "config": [ 00:19:01.031 { 00:19:01.031 "method": "framework_set_scheduler", 00:19:01.031 "params": { 00:19:01.031 "name": "static" 00:19:01.031 } 00:19:01.031 } 00:19:01.031 ] 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "subsystem": "nvmf", 00:19:01.031 "config": [ 00:19:01.031 { 00:19:01.031 "method": "nvmf_set_config", 00:19:01.031 "params": { 00:19:01.031 "discovery_filter": "match_any", 00:19:01.031 "admin_cmd_passthru": { 00:19:01.031 "identify_ctrlr": false 00:19:01.031 } 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": "nvmf_set_max_subsystems", 00:19:01.031 "params": { 00:19:01.031 "max_subsystems": 1024 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": "nvmf_set_crdt", 00:19:01.031 "params": { 00:19:01.031 "crdt1": 0, 00:19:01.031 "crdt2": 0, 00:19:01.031 "crdt3": 0 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": "nvmf_create_transport", 00:19:01.031 "params": { 00:19:01.031 "trtype": "TCP", 00:19:01.031 "max_queue_depth": 128, 00:19:01.031 "max_io_qpairs_per_ctrlr": 127, 00:19:01.031 "in_capsule_data_size": 4096, 00:19:01.031 "max_io_size": 131072, 00:19:01.031 "io_unit_size": 131072, 00:19:01.031 "max_aq_depth": 128, 00:19:01.031 "num_shared_buffers": 511, 00:19:01.031 "buf_cache_size": 4294967295, 00:19:01.031 "dif_insert_or_strip": false, 00:19:01.031 "zcopy": false, 00:19:01.031 "c2h_success": false, 00:19:01.031 "sock_priority": 0, 00:19:01.031 "abort_timeout_sec": 1, 00:19:01.031 "ack_timeout": 0, 00:19:01.031 "data_wr_pool_size": 0 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": "nvmf_create_subsystem", 00:19:01.031 "params": { 00:19:01.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.031 "allow_any_host": false, 00:19:01.031 "serial_number": "00000000000000000000", 00:19:01.031 "model_number": "SPDK bdev Controller", 00:19:01.031 "max_namespaces": 32, 00:19:01.031 "min_cntlid": 1, 00:19:01.031 "max_cntlid": 65519, 00:19:01.031 "ana_reporting": false 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": "nvmf_subsystem_add_host", 00:19:01.031 "params": { 00:19:01.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.031 "host": "nqn.2016-06.io.spdk:host1", 00:19:01.031 "psk": "key0" 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": "nvmf_subsystem_add_ns", 00:19:01.031 "params": { 00:19:01.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.031 "namespace": { 00:19:01.031 "nsid": 1, 00:19:01.031 "bdev_name": "malloc0", 00:19:01.031 "nguid": "9E5D1D6DDEAD429AA2A1387EC443479C", 00:19:01.031 "uuid": "9e5d1d6d-dead-429a-a2a1-387ec443479c", 00:19:01.031 "no_auto_visible": false 00:19:01.031 } 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": "nvmf_subsystem_add_listener", 00:19:01.031 "params": { 
00:19:01.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.031 "listen_address": { 00:19:01.031 "trtype": "TCP", 00:19:01.031 "adrfam": "IPv4", 00:19:01.031 "traddr": "10.0.0.2", 00:19:01.031 "trsvcid": "4420" 00:19:01.031 }, 00:19:01.031 "secure_channel": true 00:19:01.031 } 00:19:01.031 } 00:19:01.031 ] 00:19:01.031 } 00:19:01.031 ] 00:19:01.031 }' 00:19:01.031 02:38:34 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:01.031 02:38:34 -- target/tls.sh@264 -- # bperfcfg='{ 00:19:01.031 "subsystems": [ 00:19:01.031 { 00:19:01.031 "subsystem": "keyring", 00:19:01.031 "config": [ 00:19:01.031 { 00:19:01.031 "method": "keyring_file_add_key", 00:19:01.031 "params": { 00:19:01.031 "name": "key0", 00:19:01.031 "path": "/tmp/tmp.86E7sGhww9" 00:19:01.031 } 00:19:01.031 } 00:19:01.031 ] 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "subsystem": "iobuf", 00:19:01.031 "config": [ 00:19:01.031 { 00:19:01.031 "method": "iobuf_set_options", 00:19:01.031 "params": { 00:19:01.031 "small_pool_count": 8192, 00:19:01.031 "large_pool_count": 1024, 00:19:01.031 "small_bufsize": 8192, 00:19:01.031 "large_bufsize": 135168 00:19:01.031 } 00:19:01.031 } 00:19:01.031 ] 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "subsystem": "sock", 00:19:01.031 "config": [ 00:19:01.031 { 00:19:01.031 "method": "sock_impl_set_options", 00:19:01.031 "params": { 00:19:01.031 "impl_name": "posix", 00:19:01.031 "recv_buf_size": 2097152, 00:19:01.031 "send_buf_size": 2097152, 00:19:01.031 "enable_recv_pipe": true, 00:19:01.031 "enable_quickack": false, 00:19:01.031 "enable_placement_id": 0, 00:19:01.031 "enable_zerocopy_send_server": true, 00:19:01.031 "enable_zerocopy_send_client": false, 00:19:01.031 "zerocopy_threshold": 0, 00:19:01.031 "tls_version": 0, 00:19:01.031 "enable_ktls": false 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": "sock_impl_set_options", 00:19:01.031 "params": { 00:19:01.031 "impl_name": "ssl", 00:19:01.031 "recv_buf_size": 4096, 00:19:01.031 "send_buf_size": 4096, 00:19:01.031 "enable_recv_pipe": true, 00:19:01.031 "enable_quickack": false, 00:19:01.031 "enable_placement_id": 0, 00:19:01.031 "enable_zerocopy_send_server": true, 00:19:01.031 "enable_zerocopy_send_client": false, 00:19:01.031 "zerocopy_threshold": 0, 00:19:01.031 "tls_version": 0, 00:19:01.031 "enable_ktls": false 00:19:01.031 } 00:19:01.031 } 00:19:01.031 ] 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "subsystem": "vmd", 00:19:01.031 "config": [] 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "subsystem": "accel", 00:19:01.031 "config": [ 00:19:01.031 { 00:19:01.031 "method": "accel_set_options", 00:19:01.031 "params": { 00:19:01.031 "small_cache_size": 128, 00:19:01.031 "large_cache_size": 16, 00:19:01.031 "task_count": 2048, 00:19:01.031 "sequence_count": 2048, 00:19:01.031 "buf_count": 2048 00:19:01.031 } 00:19:01.031 } 00:19:01.031 ] 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "subsystem": "bdev", 00:19:01.031 "config": [ 00:19:01.031 { 00:19:01.031 "method": "bdev_set_options", 00:19:01.031 "params": { 00:19:01.031 "bdev_io_pool_size": 65535, 00:19:01.031 "bdev_io_cache_size": 256, 00:19:01.031 "bdev_auto_examine": true, 00:19:01.031 "iobuf_small_cache_size": 128, 00:19:01.031 "iobuf_large_cache_size": 16 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": "bdev_raid_set_options", 00:19:01.031 "params": { 00:19:01.031 "process_window_size_kb": 1024 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": 
"bdev_iscsi_set_options", 00:19:01.031 "params": { 00:19:01.031 "timeout_sec": 30 00:19:01.031 } 00:19:01.031 }, 00:19:01.031 { 00:19:01.031 "method": "bdev_nvme_set_options", 00:19:01.031 "params": { 00:19:01.031 "action_on_timeout": "none", 00:19:01.031 "timeout_us": 0, 00:19:01.031 "timeout_admin_us": 0, 00:19:01.031 "keep_alive_timeout_ms": 10000, 00:19:01.031 "arbitration_burst": 0, 00:19:01.031 "low_priority_weight": 0, 00:19:01.031 "medium_priority_weight": 0, 00:19:01.031 "high_priority_weight": 0, 00:19:01.031 "nvme_adminq_poll_period_us": 10000, 00:19:01.031 "nvme_ioq_poll_period_us": 0, 00:19:01.031 "io_queue_requests": 512, 00:19:01.031 "delay_cmd_submit": true, 00:19:01.031 "transport_retry_count": 4, 00:19:01.031 "bdev_retry_count": 3, 00:19:01.032 "transport_ack_timeout": 0, 00:19:01.032 "ctrlr_loss_timeout_sec": 0, 00:19:01.032 "reconnect_delay_sec": 0, 00:19:01.032 "fast_io_fail_timeout_sec": 0, 00:19:01.032 "disable_auto_failback": false, 00:19:01.032 "generate_uuids": false, 00:19:01.032 "transport_tos": 0, 00:19:01.032 "nvme_error_stat": false, 00:19:01.032 "rdma_srq_size": 0, 00:19:01.032 "io_path_stat": false, 00:19:01.032 "allow_accel_sequence": false, 00:19:01.032 "rdma_max_cq_size": 0, 00:19:01.032 "rdma_cm_event_timeout_ms": 0, 00:19:01.032 "dhchap_digests": [ 00:19:01.032 "sha256", 00:19:01.032 "sha384", 00:19:01.032 "sha512" 00:19:01.032 ], 00:19:01.032 "dhchap_dhgroups": [ 00:19:01.032 "null", 00:19:01.032 "ffdhe2048", 00:19:01.032 "ffdhe3072", 00:19:01.032 "ffdhe4096", 00:19:01.032 "ffdhe6144", 00:19:01.032 "ffdhe8192" 00:19:01.032 ] 00:19:01.032 } 00:19:01.032 }, 00:19:01.032 { 00:19:01.032 "method": "bdev_nvme_attach_controller", 00:19:01.032 "params": { 00:19:01.032 "name": "nvme0", 00:19:01.032 "trtype": "TCP", 00:19:01.032 "adrfam": "IPv4", 00:19:01.032 "traddr": "10.0.0.2", 00:19:01.032 "trsvcid": "4420", 00:19:01.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.032 "prchk_reftag": false, 00:19:01.032 "prchk_guard": false, 00:19:01.032 "ctrlr_loss_timeout_sec": 0, 00:19:01.032 "reconnect_delay_sec": 0, 00:19:01.032 "fast_io_fail_timeout_sec": 0, 00:19:01.032 "psk": "key0", 00:19:01.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.032 "hdgst": false, 00:19:01.032 "ddgst": false 00:19:01.032 } 00:19:01.032 }, 00:19:01.032 { 00:19:01.032 "method": "bdev_nvme_set_hotplug", 00:19:01.032 "params": { 00:19:01.032 "period_us": 100000, 00:19:01.032 "enable": false 00:19:01.032 } 00:19:01.032 }, 00:19:01.032 { 00:19:01.032 "method": "bdev_enable_histogram", 00:19:01.032 "params": { 00:19:01.032 "name": "nvme0n1", 00:19:01.032 "enable": true 00:19:01.032 } 00:19:01.032 }, 00:19:01.032 { 00:19:01.032 "method": "bdev_wait_for_examine" 00:19:01.032 } 00:19:01.032 ] 00:19:01.032 }, 00:19:01.032 { 00:19:01.032 "subsystem": "nbd", 00:19:01.032 "config": [] 00:19:01.032 } 00:19:01.032 ] 00:19:01.032 }' 00:19:01.032 02:38:34 -- target/tls.sh@266 -- # killprocess 145608 00:19:01.032 02:38:34 -- common/autotest_common.sh@936 -- # '[' -z 145608 ']' 00:19:01.032 02:38:34 -- common/autotest_common.sh@940 -- # kill -0 145608 00:19:01.032 02:38:34 -- common/autotest_common.sh@941 -- # uname 00:19:01.032 02:38:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:01.032 02:38:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 145608 00:19:01.293 02:38:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:01.293 02:38:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:01.293 02:38:34 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 145608' 00:19:01.293 killing process with pid 145608 00:19:01.293 02:38:34 -- common/autotest_common.sh@955 -- # kill 145608 00:19:01.293 Received shutdown signal, test time was about 1.000000 seconds 00:19:01.293 00:19:01.293 Latency(us) 00:19:01.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.293 =================================================================================================================== 00:19:01.293 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.293 02:38:34 -- common/autotest_common.sh@960 -- # wait 145608 00:19:01.293 02:38:34 -- target/tls.sh@267 -- # killprocess 145555 00:19:01.293 02:38:34 -- common/autotest_common.sh@936 -- # '[' -z 145555 ']' 00:19:01.293 02:38:34 -- common/autotest_common.sh@940 -- # kill -0 145555 00:19:01.293 02:38:34 -- common/autotest_common.sh@941 -- # uname 00:19:01.293 02:38:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:01.293 02:38:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 145555 00:19:01.293 02:38:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:01.293 02:38:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:01.293 02:38:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 145555' 00:19:01.293 killing process with pid 145555 00:19:01.293 02:38:34 -- common/autotest_common.sh@955 -- # kill 145555 00:19:01.293 02:38:34 -- common/autotest_common.sh@960 -- # wait 145555 00:19:01.554 02:38:34 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:19:01.554 02:38:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:01.554 02:38:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:01.554 02:38:34 -- common/autotest_common.sh@10 -- # set +x 00:19:01.554 02:38:34 -- target/tls.sh@269 -- # echo '{ 00:19:01.554 "subsystems": [ 00:19:01.554 { 00:19:01.554 "subsystem": "keyring", 00:19:01.554 "config": [ 00:19:01.554 { 00:19:01.554 "method": "keyring_file_add_key", 00:19:01.554 "params": { 00:19:01.554 "name": "key0", 00:19:01.554 "path": "/tmp/tmp.86E7sGhww9" 00:19:01.554 } 00:19:01.554 } 00:19:01.554 ] 00:19:01.554 }, 00:19:01.554 { 00:19:01.554 "subsystem": "iobuf", 00:19:01.554 "config": [ 00:19:01.554 { 00:19:01.554 "method": "iobuf_set_options", 00:19:01.554 "params": { 00:19:01.554 "small_pool_count": 8192, 00:19:01.554 "large_pool_count": 1024, 00:19:01.554 "small_bufsize": 8192, 00:19:01.554 "large_bufsize": 135168 00:19:01.554 } 00:19:01.554 } 00:19:01.554 ] 00:19:01.554 }, 00:19:01.554 { 00:19:01.554 "subsystem": "sock", 00:19:01.554 "config": [ 00:19:01.554 { 00:19:01.554 "method": "sock_impl_set_options", 00:19:01.554 "params": { 00:19:01.554 "impl_name": "posix", 00:19:01.554 "recv_buf_size": 2097152, 00:19:01.554 "send_buf_size": 2097152, 00:19:01.554 "enable_recv_pipe": true, 00:19:01.554 "enable_quickack": false, 00:19:01.554 "enable_placement_id": 0, 00:19:01.554 "enable_zerocopy_send_server": true, 00:19:01.554 "enable_zerocopy_send_client": false, 00:19:01.554 "zerocopy_threshold": 0, 00:19:01.554 "tls_version": 0, 00:19:01.554 "enable_ktls": false 00:19:01.554 } 00:19:01.554 }, 00:19:01.554 { 00:19:01.554 "method": "sock_impl_set_options", 00:19:01.554 "params": { 00:19:01.554 "impl_name": "ssl", 00:19:01.554 "recv_buf_size": 4096, 00:19:01.554 "send_buf_size": 4096, 00:19:01.554 "enable_recv_pipe": true, 00:19:01.555 "enable_quickack": false, 00:19:01.555 "enable_placement_id": 0, 
00:19:01.555 "enable_zerocopy_send_server": true, 00:19:01.555 "enable_zerocopy_send_client": false, 00:19:01.555 "zerocopy_threshold": 0, 00:19:01.555 "tls_version": 0, 00:19:01.555 "enable_ktls": false 00:19:01.555 } 00:19:01.555 } 00:19:01.555 ] 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "subsystem": "vmd", 00:19:01.555 "config": [] 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "subsystem": "accel", 00:19:01.555 "config": [ 00:19:01.555 { 00:19:01.555 "method": "accel_set_options", 00:19:01.555 "params": { 00:19:01.555 "small_cache_size": 128, 00:19:01.555 "large_cache_size": 16, 00:19:01.555 "task_count": 2048, 00:19:01.555 "sequence_count": 2048, 00:19:01.555 "buf_count": 2048 00:19:01.555 } 00:19:01.555 } 00:19:01.555 ] 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "subsystem": "bdev", 00:19:01.555 "config": [ 00:19:01.555 { 00:19:01.555 "method": "bdev_set_options", 00:19:01.555 "params": { 00:19:01.555 "bdev_io_pool_size": 65535, 00:19:01.555 "bdev_io_cache_size": 256, 00:19:01.555 "bdev_auto_examine": true, 00:19:01.555 "iobuf_small_cache_size": 128, 00:19:01.555 "iobuf_large_cache_size": 16 00:19:01.555 } 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "method": "bdev_raid_set_options", 00:19:01.555 "params": { 00:19:01.555 "process_window_size_kb": 1024 00:19:01.555 } 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "method": "bdev_iscsi_set_options", 00:19:01.555 "params": { 00:19:01.555 "timeout_sec": 30 00:19:01.555 } 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "method": "bdev_nvme_set_options", 00:19:01.555 "params": { 00:19:01.555 "action_on_timeout": "none", 00:19:01.555 "timeout_us": 0, 00:19:01.555 "timeout_admin_us": 0, 00:19:01.555 "keep_alive_timeout_ms": 10000, 00:19:01.555 "arbitration_burst": 0, 00:19:01.555 "low_priority_weight": 0, 00:19:01.555 "medium_priority_weight": 0, 00:19:01.555 "high_priority_weight": 0, 00:19:01.555 "nvme_adminq_poll_period_us": 10000, 00:19:01.555 "nvme_ioq_poll_period_us": 0, 00:19:01.555 "io_queue_requests": 0, 00:19:01.555 "delay_cmd_submit": true, 00:19:01.555 "transport_retry_count": 4, 00:19:01.555 "bdev_retry_count": 3, 00:19:01.555 "transport_ack_timeout": 0, 00:19:01.555 "ctrlr_loss_timeout_sec": 0, 00:19:01.555 "reconnect_delay_sec": 0, 00:19:01.555 "fast_io_fail_timeout_sec": 0, 00:19:01.555 "disable_auto_failback": false, 00:19:01.555 "generate_uuids": false, 00:19:01.555 "transport_tos": 0, 00:19:01.555 "nvme_error_stat": false, 00:19:01.555 "rdma_srq_size": 0, 00:19:01.555 "io_path_stat": false, 00:19:01.555 "allow_accel_sequence": false, 00:19:01.555 "rdma_max_cq_size": 0, 00:19:01.555 "rdma_cm_event_timeout_ms": 0, 00:19:01.555 "dhchap_digests": [ 00:19:01.555 "sha256", 00:19:01.555 "sha384", 00:19:01.555 "sha512" 00:19:01.555 ], 00:19:01.555 "dhchap_dhgroups": [ 00:19:01.555 "null", 00:19:01.555 "ffdhe2048", 00:19:01.555 "ffdhe3072", 00:19:01.555 "ffdhe4096", 00:19:01.555 "ffdhe6144", 00:19:01.555 "ffdhe8192" 00:19:01.555 ] 00:19:01.555 } 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "method": "bdev_nvme_set_hotplug", 00:19:01.555 "params": { 00:19:01.555 "period_us": 100000, 00:19:01.555 "enable": false 00:19:01.555 } 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "method": "bdev_malloc_create", 00:19:01.555 "params": { 00:19:01.555 "name": "malloc0", 00:19:01.555 "num_blocks": 8192, 00:19:01.555 "block_size": 4096, 00:19:01.555 "physical_block_size": 4096, 00:19:01.555 "uuid": "9e5d1d6d-dead-429a-a2a1-387ec443479c", 00:19:01.555 "optimal_io_boundary": 0 00:19:01.555 } 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "method": 
"bdev_wait_for_examine" 00:19:01.555 } 00:19:01.555 ] 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "subsystem": "nbd", 00:19:01.555 "config": [] 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "subsystem": "scheduler", 00:19:01.555 "config": [ 00:19:01.555 { 00:19:01.555 "method": "framework_set_scheduler", 00:19:01.555 "params": { 00:19:01.555 "name": "static" 00:19:01.555 } 00:19:01.555 } 00:19:01.555 ] 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "subsystem": "nvmf", 00:19:01.555 "config": [ 00:19:01.555 { 00:19:01.555 "method": "nvmf_set_config", 00:19:01.555 "params": { 00:19:01.555 "discovery_filter": "match_any", 00:19:01.555 "admin_cmd_passthru": { 00:19:01.555 "identify_ctrlr": false 00:19:01.555 } 00:19:01.555 } 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "method": "nvmf_set_max_subsystems", 00:19:01.555 "params": { 00:19:01.555 "max_subsystems": 1024 00:19:01.555 } 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "method": "nvmf_set_crdt", 00:19:01.555 "params": { 00:19:01.555 "crdt1": 0, 00:19:01.555 "crdt2": 0, 00:19:01.555 "crdt3": 0 00:19:01.555 } 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "method": "nvmf_create_transport", 00:19:01.555 "params": { 00:19:01.555 "trtype": "TCP", 00:19:01.555 "max_queue_depth": 128, 00:19:01.555 "max_io_qpairs_per_ctrlr": 127, 00:19:01.555 "in_capsule_data_size": 4096, 00:19:01.555 "max_io_size": 131072, 00:19:01.555 "io_unit_size": 131072, 00:19:01.555 "max_aq_depth": 128, 00:19:01.555 "num_shared_buffers": 511, 00:19:01.555 "buf_cache_size": 4294967295, 00:19:01.555 "dif_insert_or_strip": false, 00:19:01.555 "zcopy": false, 00:19:01.555 "c2h_success": false, 00:19:01.555 "sock_priority": 0, 00:19:01.555 "abort_timeout_sec": 1, 00:19:01.555 "ack_timeout": 0, 00:19:01.555 "data_wr_pool_size": 0 00:19:01.555 } 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "method": "nvmf_create_subsystem", 00:19:01.555 "params": { 00:19:01.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.555 "allow_any_host": false, 00:19:01.555 "serial_number": "00000000000000000000", 00:19:01.555 "model_number": "SPDK bdev Controller", 00:19:01.555 "max_namespaces": 32, 00:19:01.555 "min_cntlid": 1, 00:19:01.555 "max_cntlid": 65519, 00:19:01.555 "ana_reporting": false 00:19:01.555 } 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "method": "nvmf_subsystem_add_host", 00:19:01.555 "params": { 00:19:01.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.555 "host": "nqn.2016-06.io.spdk:host1", 00:19:01.555 "psk": "key0" 00:19:01.555 } 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "method": "nvmf_subsystem_add_ns", 00:19:01.555 "params": { 00:19:01.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.555 "namespace": { 00:19:01.555 "nsid": 1, 00:19:01.555 "bdev_name": "malloc0", 00:19:01.555 "nguid": "9E5D1D6DDEAD429AA2A1387EC443479C", 00:19:01.555 "uuid": "9e5d1d6d-dead-429a-a2a1-387ec443479c", 00:19:01.555 "no_auto_visible": false 00:19:01.555 } 00:19:01.555 } 00:19:01.555 }, 00:19:01.555 { 00:19:01.555 "method": "nvmf_subsystem_add_listener", 00:19:01.555 "params": { 00:19:01.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.555 "listen_address": { 00:19:01.555 "trtype": "TCP", 00:19:01.555 "adrfam": "IPv4", 00:19:01.555 "traddr": "10.0.0.2", 00:19:01.555 "trsvcid": "4420" 00:19:01.555 }, 00:19:01.555 "secure_channel": true 00:19:01.555 } 00:19:01.555 } 00:19:01.555 ] 00:19:01.555 } 00:19:01.555 ] 00:19:01.555 }' 00:19:01.555 02:38:34 -- nvmf/common.sh@470 -- # nvmfpid=146294 00:19:01.555 02:38:34 -- nvmf/common.sh@471 -- # waitforlisten 146294 00:19:01.555 02:38:34 -- nvmf/common.sh@469 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:01.555 02:38:34 -- common/autotest_common.sh@817 -- # '[' -z 146294 ']' 00:19:01.555 02:38:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.555 02:38:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:01.555 02:38:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.555 02:38:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:01.555 02:38:34 -- common/autotest_common.sh@10 -- # set +x 00:19:01.555 [2024-04-27 02:38:35.043669] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:19:01.555 [2024-04-27 02:38:35.043724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.555 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.555 [2024-04-27 02:38:35.108073] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.555 [2024-04-27 02:38:35.171389] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.555 [2024-04-27 02:38:35.171425] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.555 [2024-04-27 02:38:35.171433] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.555 [2024-04-27 02:38:35.171439] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.555 [2024-04-27 02:38:35.171445] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:01.555 [2024-04-27 02:38:35.171500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.817 [2024-04-27 02:38:35.360633] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.817 [2024-04-27 02:38:35.392643] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:01.817 [2024-04-27 02:38:35.401605] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.390 02:38:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:02.390 02:38:35 -- common/autotest_common.sh@850 -- # return 0 00:19:02.390 02:38:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:02.390 02:38:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:02.390 02:38:35 -- common/autotest_common.sh@10 -- # set +x 00:19:02.390 02:38:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.390 02:38:35 -- target/tls.sh@272 -- # bdevperf_pid=146490 00:19:02.390 02:38:35 -- target/tls.sh@273 -- # waitforlisten 146490 /var/tmp/bdevperf.sock 00:19:02.390 02:38:35 -- common/autotest_common.sh@817 -- # '[' -z 146490 ']' 00:19:02.390 02:38:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.390 02:38:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:02.390 02:38:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:02.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.390 02:38:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:02.390 02:38:35 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:02.390 02:38:35 -- common/autotest_common.sh@10 -- # set +x 00:19:02.390 02:38:35 -- target/tls.sh@270 -- # echo '{ 00:19:02.390 "subsystems": [ 00:19:02.390 { 00:19:02.390 "subsystem": "keyring", 00:19:02.390 "config": [ 00:19:02.390 { 00:19:02.390 "method": "keyring_file_add_key", 00:19:02.390 "params": { 00:19:02.390 "name": "key0", 00:19:02.390 "path": "/tmp/tmp.86E7sGhww9" 00:19:02.390 } 00:19:02.390 } 00:19:02.390 ] 00:19:02.390 }, 00:19:02.390 { 00:19:02.390 "subsystem": "iobuf", 00:19:02.390 "config": [ 00:19:02.390 { 00:19:02.390 "method": "iobuf_set_options", 00:19:02.390 "params": { 00:19:02.390 "small_pool_count": 8192, 00:19:02.390 "large_pool_count": 1024, 00:19:02.390 "small_bufsize": 8192, 00:19:02.390 "large_bufsize": 135168 00:19:02.390 } 00:19:02.390 } 00:19:02.390 ] 00:19:02.390 }, 00:19:02.390 { 00:19:02.390 "subsystem": "sock", 00:19:02.390 "config": [ 00:19:02.390 { 00:19:02.390 "method": "sock_impl_set_options", 00:19:02.390 "params": { 00:19:02.390 "impl_name": "posix", 00:19:02.390 "recv_buf_size": 2097152, 00:19:02.390 "send_buf_size": 2097152, 00:19:02.390 "enable_recv_pipe": true, 00:19:02.390 "enable_quickack": false, 00:19:02.390 "enable_placement_id": 0, 00:19:02.390 "enable_zerocopy_send_server": true, 00:19:02.390 "enable_zerocopy_send_client": false, 00:19:02.390 "zerocopy_threshold": 0, 00:19:02.390 "tls_version": 0, 00:19:02.390 "enable_ktls": false 00:19:02.390 } 00:19:02.390 }, 00:19:02.390 { 00:19:02.390 "method": "sock_impl_set_options", 00:19:02.390 "params": { 00:19:02.390 "impl_name": "ssl", 00:19:02.390 "recv_buf_size": 4096, 00:19:02.390 "send_buf_size": 4096, 00:19:02.390 "enable_recv_pipe": true, 00:19:02.390 "enable_quickack": false, 00:19:02.390 "enable_placement_id": 0, 00:19:02.390 "enable_zerocopy_send_server": true, 00:19:02.390 "enable_zerocopy_send_client": false, 00:19:02.390 "zerocopy_threshold": 0, 00:19:02.390 "tls_version": 0, 00:19:02.390 "enable_ktls": false 00:19:02.390 } 00:19:02.390 } 00:19:02.390 ] 00:19:02.390 }, 00:19:02.390 { 00:19:02.390 "subsystem": "vmd", 00:19:02.390 "config": [] 00:19:02.390 }, 00:19:02.390 { 00:19:02.390 "subsystem": "accel", 00:19:02.390 "config": [ 00:19:02.390 { 00:19:02.390 "method": "accel_set_options", 00:19:02.390 "params": { 00:19:02.390 "small_cache_size": 128, 00:19:02.390 "large_cache_size": 16, 00:19:02.390 "task_count": 2048, 00:19:02.390 "sequence_count": 2048, 00:19:02.390 "buf_count": 2048 00:19:02.390 } 00:19:02.390 } 00:19:02.390 ] 00:19:02.390 }, 00:19:02.390 { 00:19:02.390 "subsystem": "bdev", 00:19:02.390 "config": [ 00:19:02.390 { 00:19:02.390 "method": "bdev_set_options", 00:19:02.390 "params": { 00:19:02.390 "bdev_io_pool_size": 65535, 00:19:02.390 "bdev_io_cache_size": 256, 00:19:02.390 "bdev_auto_examine": true, 00:19:02.390 "iobuf_small_cache_size": 128, 00:19:02.390 "iobuf_large_cache_size": 16 00:19:02.390 } 00:19:02.390 }, 00:19:02.390 { 00:19:02.390 "method": "bdev_raid_set_options", 00:19:02.390 "params": { 00:19:02.390 "process_window_size_kb": 1024 00:19:02.390 } 00:19:02.390 }, 00:19:02.390 { 00:19:02.390 "method": "bdev_iscsi_set_options", 00:19:02.390 "params": { 00:19:02.390 
"timeout_sec": 30 00:19:02.390 } 00:19:02.390 }, 00:19:02.390 { 00:19:02.390 "method": "bdev_nvme_set_options", 00:19:02.390 "params": { 00:19:02.390 "action_on_timeout": "none", 00:19:02.390 "timeout_us": 0, 00:19:02.390 "timeout_admin_us": 0, 00:19:02.390 "keep_alive_timeout_ms": 10000, 00:19:02.390 "arbitration_burst": 0, 00:19:02.390 "low_priority_weight": 0, 00:19:02.391 "medium_priority_weight": 0, 00:19:02.391 "high_priority_weight": 0, 00:19:02.391 "nvme_adminq_poll_period_us": 10000, 00:19:02.391 "nvme_ioq_poll_period_us": 0, 00:19:02.391 "io_queue_requests": 512, 00:19:02.391 "delay_cmd_submit": true, 00:19:02.391 "transport_retry_count": 4, 00:19:02.391 "bdev_retry_count": 3, 00:19:02.391 "transport_ack_timeout": 0, 00:19:02.391 "ctrlr_loss_timeout_sec": 0, 00:19:02.391 "reconnect_delay_sec": 0, 00:19:02.391 "fast_io_fail_timeout_sec": 0, 00:19:02.391 "disable_auto_failback": false, 00:19:02.391 "generate_uuids": false, 00:19:02.391 "transport_tos": 0, 00:19:02.391 "nvme_error_stat": false, 00:19:02.391 "rdma_srq_size": 0, 00:19:02.391 "io_path_stat": false, 00:19:02.391 "allow_accel_sequence": false, 00:19:02.391 "rdma_max_cq_size": 0, 00:19:02.391 "rdma_cm_event_timeout_ms": 0, 00:19:02.391 "dhchap_digests": [ 00:19:02.391 "sha256", 00:19:02.391 "sha384", 00:19:02.391 "sha512" 00:19:02.391 ], 00:19:02.391 "dhchap_dhgroups": [ 00:19:02.391 "null", 00:19:02.391 "ffdhe2048", 00:19:02.391 "ffdhe3072", 00:19:02.391 "ffdhe4096", 00:19:02.391 "ffdhe6144", 00:19:02.391 "ffdhe8192" 00:19:02.391 ] 00:19:02.391 } 00:19:02.391 }, 00:19:02.391 { 00:19:02.391 "method": "bdev_nvme_attach_controller", 00:19:02.391 "params": { 00:19:02.391 "name": "nvme0", 00:19:02.391 "trtype": "TCP", 00:19:02.391 "adrfam": "IPv4", 00:19:02.391 "traddr": "10.0.0.2", 00:19:02.391 "trsvcid": "4420", 00:19:02.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.391 "prchk_reftag": false, 00:19:02.391 "prchk_guard": false, 00:19:02.391 "ctrlr_loss_timeout_sec": 0, 00:19:02.391 "reconnect_delay_sec": 0, 00:19:02.391 "fast_io_fail_timeout_sec": 0, 00:19:02.391 "psk": "key0", 00:19:02.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.391 "hdgst": false, 00:19:02.391 "ddgst": false 00:19:02.391 } 00:19:02.391 }, 00:19:02.391 { 00:19:02.391 "method": "bdev_nvme_set_hotplug", 00:19:02.391 "params": { 00:19:02.391 "period_us": 100000, 00:19:02.391 "enable": false 00:19:02.391 } 00:19:02.391 }, 00:19:02.391 { 00:19:02.391 "method": "bdev_enable_histogram", 00:19:02.391 "params": { 00:19:02.391 "name": "nvme0n1", 00:19:02.391 "enable": true 00:19:02.391 } 00:19:02.391 }, 00:19:02.391 { 00:19:02.391 "method": "bdev_wait_for_examine" 00:19:02.391 } 00:19:02.391 ] 00:19:02.391 }, 00:19:02.391 { 00:19:02.391 "subsystem": "nbd", 00:19:02.391 "config": [] 00:19:02.391 } 00:19:02.391 ] 00:19:02.391 }' 00:19:02.391 [2024-04-27 02:38:35.891259] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:19:02.391 [2024-04-27 02:38:35.891318] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146490 ] 00:19:02.391 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.391 [2024-04-27 02:38:35.949215] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.652 [2024-04-27 02:38:36.011565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.652 [2024-04-27 02:38:36.142280] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:03.226 02:38:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:03.226 02:38:36 -- common/autotest_common.sh@850 -- # return 0 00:19:03.226 02:38:36 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:03.226 02:38:36 -- target/tls.sh@275 -- # jq -r '.[].name' 00:19:03.226 02:38:36 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.226 02:38:36 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:03.486 Running I/O for 1 seconds... 00:19:04.430 00:19:04.430 Latency(us) 00:19:04.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.430 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:04.430 Verification LBA range: start 0x0 length 0x2000 00:19:04.430 nvme0n1 : 1.08 1319.70 5.16 0.00 0.00 93812.34 6662.83 148548.27 00:19:04.430 =================================================================================================================== 00:19:04.430 Total : 1319.70 5.16 0.00 0.00 93812.34 6662.83 148548.27 00:19:04.430 0 00:19:04.430 02:38:37 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:04.430 02:38:37 -- target/tls.sh@279 -- # cleanup 00:19:04.430 02:38:37 -- target/tls.sh@15 -- # process_shm --id 0 00:19:04.430 02:38:37 -- common/autotest_common.sh@794 -- # type=--id 00:19:04.430 02:38:37 -- common/autotest_common.sh@795 -- # id=0 00:19:04.430 02:38:37 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:04.430 02:38:37 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:04.430 02:38:38 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:04.430 02:38:38 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:04.430 02:38:38 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:04.430 02:38:38 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:04.430 nvmf_trace.0 00:19:04.691 02:38:38 -- common/autotest_common.sh@809 -- # return 0 00:19:04.691 02:38:38 -- target/tls.sh@16 -- # killprocess 146490 00:19:04.691 02:38:38 -- common/autotest_common.sh@936 -- # '[' -z 146490 ']' 00:19:04.691 02:38:38 -- common/autotest_common.sh@940 -- # kill -0 146490 00:19:04.691 02:38:38 -- common/autotest_common.sh@941 -- # uname 00:19:04.691 02:38:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:04.691 02:38:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146490 00:19:04.691 02:38:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:04.691 02:38:38 -- common/autotest_common.sh@946 -- # '[' 
reactor_1 = sudo ']' 00:19:04.691 02:38:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146490' 00:19:04.691 killing process with pid 146490 00:19:04.691 02:38:38 -- common/autotest_common.sh@955 -- # kill 146490 00:19:04.691 Received shutdown signal, test time was about 1.000000 seconds 00:19:04.691 00:19:04.691 Latency(us) 00:19:04.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.691 =================================================================================================================== 00:19:04.691 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:04.691 02:38:38 -- common/autotest_common.sh@960 -- # wait 146490 00:19:04.691 02:38:38 -- target/tls.sh@17 -- # nvmftestfini 00:19:04.691 02:38:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:04.691 02:38:38 -- nvmf/common.sh@117 -- # sync 00:19:04.691 02:38:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:04.691 02:38:38 -- nvmf/common.sh@120 -- # set +e 00:19:04.692 02:38:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:04.692 02:38:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:04.692 rmmod nvme_tcp 00:19:04.692 rmmod nvme_fabrics 00:19:04.952 rmmod nvme_keyring 00:19:04.952 02:38:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:04.952 02:38:38 -- nvmf/common.sh@124 -- # set -e 00:19:04.952 02:38:38 -- nvmf/common.sh@125 -- # return 0 00:19:04.952 02:38:38 -- nvmf/common.sh@478 -- # '[' -n 146294 ']' 00:19:04.952 02:38:38 -- nvmf/common.sh@479 -- # killprocess 146294 00:19:04.952 02:38:38 -- common/autotest_common.sh@936 -- # '[' -z 146294 ']' 00:19:04.952 02:38:38 -- common/autotest_common.sh@940 -- # kill -0 146294 00:19:04.952 02:38:38 -- common/autotest_common.sh@941 -- # uname 00:19:04.952 02:38:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:04.952 02:38:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146294 00:19:04.952 02:38:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:04.952 02:38:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:04.952 02:38:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146294' 00:19:04.952 killing process with pid 146294 00:19:04.952 02:38:38 -- common/autotest_common.sh@955 -- # kill 146294 00:19:04.952 02:38:38 -- common/autotest_common.sh@960 -- # wait 146294 00:19:04.952 02:38:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:04.952 02:38:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:04.952 02:38:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:04.952 02:38:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.952 02:38:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:04.952 02:38:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.952 02:38:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.952 02:38:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.502 02:38:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:07.502 02:38:40 -- target/tls.sh@18 -- # rm -f /tmp/tmp.55RjcoDZEg /tmp/tmp.udalKqwlIN /tmp/tmp.86E7sGhww9 00:19:07.502 00:19:07.502 real 1m22.691s 00:19:07.502 user 2m5.092s 00:19:07.502 sys 0m28.532s 00:19:07.502 02:38:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:07.502 02:38:40 -- common/autotest_common.sh@10 -- # set +x 00:19:07.502 ************************************ 00:19:07.502 END TEST nvmf_tls 00:19:07.502 
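[Editor's note] The nvmf_tls run above configures both endpoints entirely through JSON passed over anonymous file descriptors (nvmf_tgt reads /dev/fd/62, bdevperf reads /dev/fd/63), so the TLS wiring is easy to lose in the dump. Below is a minimal, hedged sketch of the PSK-relevant subset of that target config, with every value copied from the dump above; the rest of the real config (transport, bdev, iobuf sections and so on) is omitted for brevity, and the file name used here is only illustrative:

  # hedged sketch: the pieces of the /dev/fd/62 config that establish the TLS PSK
  cat > tls_psk_subset.json <<'EOF'
  { "subsystems": [
      { "subsystem": "keyring",
        "config": [ { "method": "keyring_file_add_key",
                      "params": { "name": "key0", "path": "/tmp/tmp.86E7sGhww9" } } ] },
      { "subsystem": "nvmf",
        "config": [
          { "method": "nvmf_subsystem_add_host",
            "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                        "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
          { "method": "nvmf_subsystem_add_listener",
            "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                        "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                            "traddr": "10.0.0.2", "trsvcid": "4420" },
                        "secure_channel": true } } ] } ] }
  EOF

The bdevperf config on /dev/fd/63 loads the same keyring entry and passes "psk": "key0" to bdev_nvme_attach_controller, which is why the connection in the trace comes up over TLS ("TLS support is considered experimental") without any key material appearing on the command line.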
************************************ 00:19:07.502 02:38:40 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:07.502 02:38:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:07.502 02:38:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:07.502 02:38:40 -- common/autotest_common.sh@10 -- # set +x 00:19:07.502 ************************************ 00:19:07.502 START TEST nvmf_fips 00:19:07.502 ************************************ 00:19:07.502 02:38:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:07.502 * Looking for test storage... 00:19:07.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:07.502 02:38:40 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.502 02:38:40 -- nvmf/common.sh@7 -- # uname -s 00:19:07.502 02:38:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.502 02:38:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.502 02:38:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.502 02:38:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.502 02:38:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.502 02:38:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.502 02:38:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.502 02:38:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.502 02:38:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.502 02:38:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.502 02:38:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.502 02:38:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.502 02:38:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.502 02:38:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.502 02:38:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.502 02:38:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.502 02:38:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.502 02:38:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.502 02:38:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.502 02:38:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.502 02:38:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.502 02:38:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.502 02:38:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.502 02:38:40 -- paths/export.sh@5 -- # export PATH 00:19:07.502 02:38:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.502 02:38:40 -- nvmf/common.sh@47 -- # : 0 00:19:07.502 02:38:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:07.502 02:38:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:07.502 02:38:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.502 02:38:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.502 02:38:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.502 02:38:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:07.502 02:38:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:07.502 02:38:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:07.502 02:38:40 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:07.502 02:38:40 -- fips/fips.sh@89 -- # check_openssl_version 00:19:07.502 02:38:40 -- fips/fips.sh@83 -- # local target=3.0.0 00:19:07.502 02:38:40 -- fips/fips.sh@85 -- # openssl version 00:19:07.502 02:38:40 -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:07.502 02:38:40 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:07.502 02:38:40 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:07.502 02:38:40 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:07.502 02:38:40 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:07.502 02:38:40 -- scripts/common.sh@333 -- # IFS=.-: 00:19:07.502 02:38:40 -- scripts/common.sh@333 -- # read -ra ver1 00:19:07.502 02:38:40 -- scripts/common.sh@334 -- # IFS=.-: 00:19:07.502 02:38:40 -- scripts/common.sh@334 -- # read -ra ver2 00:19:07.502 02:38:40 -- scripts/common.sh@335 -- # local 'op=>=' 00:19:07.502 02:38:40 -- scripts/common.sh@337 -- # ver1_l=3 00:19:07.502 02:38:40 -- scripts/common.sh@338 -- # ver2_l=3 00:19:07.502 02:38:40 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:19:07.502 02:38:40 -- scripts/common.sh@341 -- # case "$op" in 00:19:07.502 02:38:40 -- scripts/common.sh@345 -- # : 1 00:19:07.502 02:38:40 -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:07.502 02:38:40 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.502 02:38:40 -- scripts/common.sh@362 -- # decimal 3 00:19:07.502 02:38:40 -- scripts/common.sh@350 -- # local d=3 00:19:07.502 02:38:40 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:07.502 02:38:40 -- scripts/common.sh@352 -- # echo 3 00:19:07.502 02:38:40 -- scripts/common.sh@362 -- # ver1[v]=3 00:19:07.502 02:38:40 -- scripts/common.sh@363 -- # decimal 3 00:19:07.502 02:38:40 -- scripts/common.sh@350 -- # local d=3 00:19:07.502 02:38:40 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:07.502 02:38:40 -- scripts/common.sh@352 -- # echo 3 00:19:07.502 02:38:40 -- scripts/common.sh@363 -- # ver2[v]=3 00:19:07.502 02:38:40 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:07.502 02:38:40 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:07.502 02:38:40 -- scripts/common.sh@361 -- # (( v++ )) 00:19:07.502 02:38:40 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.502 02:38:40 -- scripts/common.sh@362 -- # decimal 0 00:19:07.502 02:38:40 -- scripts/common.sh@350 -- # local d=0 00:19:07.502 02:38:40 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:07.502 02:38:40 -- scripts/common.sh@352 -- # echo 0 00:19:07.502 02:38:40 -- scripts/common.sh@362 -- # ver1[v]=0 00:19:07.502 02:38:40 -- scripts/common.sh@363 -- # decimal 0 00:19:07.502 02:38:40 -- scripts/common.sh@350 -- # local d=0 00:19:07.502 02:38:40 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:07.502 02:38:40 -- scripts/common.sh@352 -- # echo 0 00:19:07.502 02:38:40 -- scripts/common.sh@363 -- # ver2[v]=0 00:19:07.502 02:38:40 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:07.502 02:38:40 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:07.502 02:38:40 -- scripts/common.sh@361 -- # (( v++ )) 00:19:07.502 02:38:40 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.502 02:38:41 -- scripts/common.sh@362 -- # decimal 9 00:19:07.502 02:38:41 -- scripts/common.sh@350 -- # local d=9 00:19:07.502 02:38:41 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:07.502 02:38:41 -- scripts/common.sh@352 -- # echo 9 00:19:07.502 02:38:41 -- scripts/common.sh@362 -- # ver1[v]=9 00:19:07.502 02:38:41 -- scripts/common.sh@363 -- # decimal 0 00:19:07.502 02:38:41 -- scripts/common.sh@350 -- # local d=0 00:19:07.502 02:38:41 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:07.502 02:38:41 -- scripts/common.sh@352 -- # echo 0 00:19:07.502 02:38:41 -- scripts/common.sh@363 -- # ver2[v]=0 00:19:07.502 02:38:41 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:07.502 02:38:41 -- scripts/common.sh@364 -- # return 0 00:19:07.502 02:38:41 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:07.502 02:38:41 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:07.502 02:38:41 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:07.502 02:38:41 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:07.502 02:38:41 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:07.502 02:38:41 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:07.502 02:38:41 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:07.502 02:38:41 -- fips/fips.sh@113 -- # build_openssl_config 00:19:07.502 02:38:41 -- fips/fips.sh@37 -- # cat 00:19:07.502 02:38:41 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:19:07.502 02:38:41 -- fips/fips.sh@58 -- # cat - 00:19:07.502 02:38:41 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:07.502 02:38:41 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:07.502 02:38:41 -- fips/fips.sh@116 -- # mapfile -t providers 00:19:07.502 02:38:41 -- fips/fips.sh@116 -- # openssl list -providers 00:19:07.502 02:38:41 -- fips/fips.sh@116 -- # grep name 00:19:07.502 02:38:41 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:07.502 02:38:41 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:07.502 02:38:41 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:07.503 02:38:41 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:07.503 02:38:41 -- common/autotest_common.sh@638 -- # local es=0 00:19:07.503 02:38:41 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:07.503 02:38:41 -- fips/fips.sh@127 -- # : 00:19:07.503 02:38:41 -- common/autotest_common.sh@626 -- # local arg=openssl 00:19:07.503 02:38:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:07.503 02:38:41 -- common/autotest_common.sh@630 -- # type -t openssl 00:19:07.503 02:38:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:07.503 02:38:41 -- common/autotest_common.sh@632 -- # type -P openssl 00:19:07.503 02:38:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:07.503 02:38:41 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:19:07.503 02:38:41 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:19:07.503 02:38:41 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:19:07.765 Error setting digest 00:19:07.765 0082409CB57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:07.765 0082409CB57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:07.765 02:38:41 -- common/autotest_common.sh@641 -- # es=1 00:19:07.765 02:38:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:07.765 02:38:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:07.765 02:38:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:07.765 02:38:41 -- fips/fips.sh@130 -- # nvmftestinit 00:19:07.765 02:38:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:07.765 02:38:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.765 02:38:41 -- nvmf/common.sh@437 -- # prepare_net_devs 
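[Editor's note] The openssl sequence above is the FIPS precondition check from fips.sh, and the "Error setting digest" lines are its expected pass signal, not a test failure. Read loosely from the trace: the script requires OpenSSL >= 3.0.0 (3.0.9 here), confirms fips.so exists under the directory reported by 'openssl info -modulesdir', and, because 'openssl fipsinstall' is disabled in the Red Hat build, generates its own spdk_fips.conf via build_openssl_config and exports OPENSSL_CONF to point at it; it then asserts that an MD5 digest can no longer be fetched. The same check done by hand, as a rough sketch using only standard OpenSSL 3.x commands (results will depend on whether the FIPS provider is actually enforced on the host):

  openssl version                       # needs >= 3.0.0 for provider-based FIPS
  openssl info -modulesdir              # directory expected to contain fips.so
  openssl list -providers | grep name   # expect both a base and a fips provider
  openssl md5 /dev/null \
      && echo 'MD5 still works: FIPS not enforced' \
      || echo 'MD5 rejected: FIPS provider active'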
00:19:07.765 02:38:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:07.765 02:38:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:07.765 02:38:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.765 02:38:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.765 02:38:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.765 02:38:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:07.765 02:38:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:07.765 02:38:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:07.765 02:38:41 -- common/autotest_common.sh@10 -- # set +x 00:19:14.358 02:38:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:14.358 02:38:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.358 02:38:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.358 02:38:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.358 02:38:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.358 02:38:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.358 02:38:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.358 02:38:47 -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.358 02:38:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.358 02:38:47 -- nvmf/common.sh@296 -- # e810=() 00:19:14.358 02:38:47 -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.358 02:38:47 -- nvmf/common.sh@297 -- # x722=() 00:19:14.358 02:38:47 -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.358 02:38:47 -- nvmf/common.sh@298 -- # mlx=() 00:19:14.358 02:38:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.358 02:38:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.358 02:38:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.358 02:38:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.358 02:38:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.358 02:38:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.358 02:38:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.358 02:38:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.358 02:38:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.358 02:38:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.358 02:38:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.358 02:38:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.358 02:38:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.358 02:38:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.358 02:38:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.358 02:38:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.358 02:38:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:14.358 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:14.358 02:38:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.358 02:38:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:14.358 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:14.358 02:38:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.358 02:38:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.358 02:38:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.358 02:38:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:14.358 02:38:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.358 02:38:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:14.358 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:14.358 02:38:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.358 02:38:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.358 02:38:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.358 02:38:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:14.358 02:38:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.358 02:38:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:14.358 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:14.358 02:38:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.358 02:38:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:14.358 02:38:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:14.358 02:38:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:14.358 02:38:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:14.358 02:38:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.358 02:38:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.358 02:38:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.358 02:38:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.358 02:38:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.358 02:38:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.358 02:38:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.358 02:38:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.358 02:38:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.358 02:38:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.358 02:38:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.358 02:38:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.358 02:38:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.358 02:38:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.358 02:38:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:19:14.358 02:38:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.358 02:38:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.358 02:38:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.358 02:38:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.358 02:38:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:19:14.358 00:19:14.358 --- 10.0.0.2 ping statistics --- 00:19:14.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.358 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:19:14.358 02:38:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.413 ms 00:19:14.358 00:19:14.358 --- 10.0.0.1 ping statistics --- 00:19:14.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.358 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:19:14.358 02:38:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.358 02:38:47 -- nvmf/common.sh@411 -- # return 0 00:19:14.359 02:38:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:14.359 02:38:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.359 02:38:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:14.359 02:38:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:14.359 02:38:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.359 02:38:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:14.359 02:38:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:14.359 02:38:47 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:14.359 02:38:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:14.359 02:38:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:14.359 02:38:47 -- common/autotest_common.sh@10 -- # set +x 00:19:14.359 02:38:47 -- nvmf/common.sh@470 -- # nvmfpid=151027 00:19:14.359 02:38:47 -- nvmf/common.sh@471 -- # waitforlisten 151027 00:19:14.359 02:38:47 -- common/autotest_common.sh@817 -- # '[' -z 151027 ']' 00:19:14.359 02:38:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.359 02:38:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:14.359 02:38:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.359 02:38:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:14.359 02:38:47 -- common/autotest_common.sh@10 -- # set +x 00:19:14.359 02:38:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:14.359 [2024-04-27 02:38:47.794137] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
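[Editor's note] nvmftestinit above builds a back-to-back NVMe/TCP topology out of the two E810 ports discovered earlier instead of using loopback: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove the path in both directions before any NVMe traffic flows. Condensed from the trace (a sketch of what the helper does, not a drop-in replacement for it):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port toward the initiator
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp                                               # load the kernel NVMe/TCP module as well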
00:19:14.359 [2024-04-27 02:38:47.794214] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.359 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.359 [2024-04-27 02:38:47.865620] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.359 [2024-04-27 02:38:47.937664] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.359 [2024-04-27 02:38:47.937697] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.359 [2024-04-27 02:38:47.937704] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.359 [2024-04-27 02:38:47.937711] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.359 [2024-04-27 02:38:47.937716] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.359 [2024-04-27 02:38:47.937736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.932 02:38:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:14.932 02:38:48 -- common/autotest_common.sh@850 -- # return 0 00:19:14.932 02:38:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:14.932 02:38:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:14.932 02:38:48 -- common/autotest_common.sh@10 -- # set +x 00:19:15.193 02:38:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.193 02:38:48 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:15.193 02:38:48 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:15.193 02:38:48 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:15.193 02:38:48 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:15.193 02:38:48 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:15.193 02:38:48 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:15.193 02:38:48 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:15.193 02:38:48 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:15.194 [2024-04-27 02:38:48.725812] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.194 [2024-04-27 02:38:48.741819] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:15.194 [2024-04-27 02:38:48.741972] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.194 [2024-04-27 02:38:48.768522] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:15.194 malloc0 00:19:15.194 02:38:48 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.194 02:38:48 -- fips/fips.sh@147 -- # bdevperf_pid=151370 00:19:15.194 02:38:48 -- fips/fips.sh@148 -- # waitforlisten 151370 /var/tmp/bdevperf.sock 00:19:15.194 02:38:48 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z 
-r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.194 02:38:48 -- common/autotest_common.sh@817 -- # '[' -z 151370 ']' 00:19:15.194 02:38:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.194 02:38:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:15.194 02:38:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.194 02:38:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:15.194 02:38:48 -- common/autotest_common.sh@10 -- # set +x 00:19:15.455 [2024-04-27 02:38:48.851231] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:19:15.455 [2024-04-27 02:38:48.851290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151370 ] 00:19:15.455 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.455 [2024-04-27 02:38:48.901181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.455 [2024-04-27 02:38:48.951872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.028 02:38:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:16.028 02:38:49 -- common/autotest_common.sh@850 -- # return 0 00:19:16.028 02:38:49 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:16.289 [2024-04-27 02:38:49.729051] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.290 [2024-04-27 02:38:49.729116] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:16.290 TLSTESTn1 00:19:16.290 02:38:49 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:16.290 Running I/O for 10 seconds... 
00:19:28.525 00:19:28.526 Latency(us) 00:19:28.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.526 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:28.526 Verification LBA range: start 0x0 length 0x2000 00:19:28.526 TLSTESTn1 : 10.09 1868.57 7.30 0.00 0.00 68241.55 6253.23 164276.91 00:19:28.526 =================================================================================================================== 00:19:28.526 Total : 1868.57 7.30 0.00 0.00 68241.55 6253.23 164276.91 00:19:28.526 0 00:19:28.526 02:39:00 -- fips/fips.sh@1 -- # cleanup 00:19:28.526 02:39:00 -- fips/fips.sh@15 -- # process_shm --id 0 00:19:28.526 02:39:00 -- common/autotest_common.sh@794 -- # type=--id 00:19:28.526 02:39:00 -- common/autotest_common.sh@795 -- # id=0 00:19:28.526 02:39:00 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:28.526 02:39:00 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:28.526 02:39:00 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:28.526 02:39:00 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:28.526 02:39:00 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:28.526 02:39:00 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:28.526 nvmf_trace.0 00:19:28.526 02:39:00 -- common/autotest_common.sh@809 -- # return 0 00:19:28.526 02:39:00 -- fips/fips.sh@16 -- # killprocess 151370 00:19:28.526 02:39:00 -- common/autotest_common.sh@936 -- # '[' -z 151370 ']' 00:19:28.526 02:39:00 -- common/autotest_common.sh@940 -- # kill -0 151370 00:19:28.526 02:39:00 -- common/autotest_common.sh@941 -- # uname 00:19:28.526 02:39:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:28.526 02:39:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151370 00:19:28.526 02:39:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:28.526 02:39:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:28.526 02:39:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151370' 00:19:28.526 killing process with pid 151370 00:19:28.526 02:39:00 -- common/autotest_common.sh@955 -- # kill 151370 00:19:28.526 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.526 00:19:28.526 Latency(us) 00:19:28.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.526 =================================================================================================================== 00:19:28.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.526 [2024-04-27 02:39:00.158088] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:28.526 02:39:00 -- common/autotest_common.sh@960 -- # wait 151370 00:19:28.526 02:39:00 -- fips/fips.sh@17 -- # nvmftestfini 00:19:28.526 02:39:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:28.526 02:39:00 -- nvmf/common.sh@117 -- # sync 00:19:28.526 02:39:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:28.526 02:39:00 -- nvmf/common.sh@120 -- # set +e 00:19:28.526 02:39:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:28.526 02:39:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:28.526 rmmod nvme_tcp 00:19:28.526 rmmod nvme_fabrics 00:19:28.526 rmmod nvme_keyring 00:19:28.526 
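[Editor's note] The 10-second run above is the actual FIPS data-path check: bdevperf attaches a controller named TLSTEST over TLS using the generated key file and runs a queue-depth-128, 4 KiB verify workload (about 1868 IOPS / 7.3 MiB/s here); the two "deprecated feature" warnings about the PSK path are expected on this code base until the announced v24.09 removal. If that step ever needs to be repeated by hand against a target that is already listening, the commands from the trace reduce to roughly the following (paths relative to the spdk checkout; the hedged assumption is that the same key file and listener are still in place):

  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk ./test/nvmf/fips/key.txt
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests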
02:39:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:28.526 02:39:00 -- nvmf/common.sh@124 -- # set -e 00:19:28.526 02:39:00 -- nvmf/common.sh@125 -- # return 0 00:19:28.526 02:39:00 -- nvmf/common.sh@478 -- # '[' -n 151027 ']' 00:19:28.526 02:39:00 -- nvmf/common.sh@479 -- # killprocess 151027 00:19:28.526 02:39:00 -- common/autotest_common.sh@936 -- # '[' -z 151027 ']' 00:19:28.526 02:39:00 -- common/autotest_common.sh@940 -- # kill -0 151027 00:19:28.526 02:39:00 -- common/autotest_common.sh@941 -- # uname 00:19:28.526 02:39:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:28.526 02:39:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151027 00:19:28.526 02:39:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:28.526 02:39:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:28.526 02:39:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151027' 00:19:28.526 killing process with pid 151027 00:19:28.526 02:39:00 -- common/autotest_common.sh@955 -- # kill 151027 00:19:28.526 [2024-04-27 02:39:00.397120] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:28.526 02:39:00 -- common/autotest_common.sh@960 -- # wait 151027 00:19:28.526 02:39:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:28.526 02:39:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:28.526 02:39:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:28.526 02:39:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:28.526 02:39:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:28.526 02:39:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.526 02:39:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.526 02:39:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.097 02:39:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:29.097 02:39:02 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:29.097 00:19:29.097 real 0m21.802s 00:19:29.097 user 0m22.513s 00:19:29.097 sys 0m9.735s 00:19:29.097 02:39:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:29.097 02:39:02 -- common/autotest_common.sh@10 -- # set +x 00:19:29.097 ************************************ 00:19:29.097 END TEST nvmf_fips 00:19:29.097 ************************************ 00:19:29.097 02:39:02 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:19:29.097 02:39:02 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:19:29.097 02:39:02 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:19:29.097 02:39:02 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:19:29.097 02:39:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:29.097 02:39:02 -- common/autotest_common.sh@10 -- # set +x 00:19:37.243 02:39:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:37.243 02:39:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:37.243 02:39:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:37.243 02:39:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:37.243 02:39:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:37.243 02:39:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:37.243 02:39:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:37.243 02:39:09 -- nvmf/common.sh@295 -- # net_devs=() 00:19:37.243 02:39:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:37.243 02:39:09 
-- nvmf/common.sh@296 -- # e810=() 00:19:37.243 02:39:09 -- nvmf/common.sh@296 -- # local -ga e810 00:19:37.243 02:39:09 -- nvmf/common.sh@297 -- # x722=() 00:19:37.243 02:39:09 -- nvmf/common.sh@297 -- # local -ga x722 00:19:37.244 02:39:09 -- nvmf/common.sh@298 -- # mlx=() 00:19:37.244 02:39:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:37.244 02:39:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.244 02:39:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.244 02:39:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.244 02:39:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.244 02:39:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.244 02:39:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.244 02:39:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.244 02:39:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.244 02:39:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.244 02:39:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.244 02:39:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.244 02:39:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:37.244 02:39:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:37.244 02:39:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:37.244 02:39:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:37.244 02:39:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:37.244 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:37.244 02:39:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:37.244 02:39:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:37.244 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:37.244 02:39:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:37.244 02:39:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:37.244 02:39:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.244 02:39:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:37.244 02:39:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.244 02:39:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:19:37.244 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:37.244 02:39:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.244 02:39:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:37.244 02:39:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.244 02:39:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:37.244 02:39:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.244 02:39:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:37.244 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:37.244 02:39:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.244 02:39:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:37.244 02:39:09 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.244 02:39:09 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:19:37.244 02:39:09 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:37.244 02:39:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:37.244 02:39:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:37.244 02:39:09 -- common/autotest_common.sh@10 -- # set +x 00:19:37.244 ************************************ 00:19:37.244 START TEST nvmf_perf_adq 00:19:37.244 ************************************ 00:19:37.244 02:39:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:37.244 * Looking for test storage... 00:19:37.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:37.244 02:39:09 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:37.244 02:39:09 -- nvmf/common.sh@7 -- # uname -s 00:19:37.244 02:39:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.244 02:39:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.244 02:39:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.244 02:39:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.244 02:39:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.244 02:39:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.244 02:39:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.244 02:39:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.244 02:39:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.244 02:39:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.244 02:39:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.244 02:39:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.244 02:39:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.244 02:39:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.244 02:39:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:37.244 02:39:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.244 02:39:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:37.244 02:39:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.244 02:39:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.244 02:39:09 -- scripts/common.sh@517 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.244 02:39:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.244 02:39:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.244 02:39:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.244 02:39:09 -- paths/export.sh@5 -- # export PATH 00:19:37.244 02:39:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.244 02:39:09 -- nvmf/common.sh@47 -- # : 0 00:19:37.244 02:39:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.244 02:39:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.244 02:39:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.244 02:39:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.244 02:39:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.244 02:39:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.244 02:39:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.244 02:39:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:37.244 02:39:09 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:37.244 02:39:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:37.244 02:39:09 -- common/autotest_common.sh@10 -- # set +x 00:19:43.837 02:39:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:43.837 02:39:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:43.837 02:39:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:43.837 02:39:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:43.837 02:39:16 -- nvmf/common.sh@292 
-- # local -a pci_net_devs 00:19:43.837 02:39:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:43.837 02:39:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:43.837 02:39:16 -- nvmf/common.sh@295 -- # net_devs=() 00:19:43.837 02:39:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:43.837 02:39:16 -- nvmf/common.sh@296 -- # e810=() 00:19:43.837 02:39:16 -- nvmf/common.sh@296 -- # local -ga e810 00:19:43.837 02:39:16 -- nvmf/common.sh@297 -- # x722=() 00:19:43.837 02:39:16 -- nvmf/common.sh@297 -- # local -ga x722 00:19:43.837 02:39:16 -- nvmf/common.sh@298 -- # mlx=() 00:19:43.837 02:39:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:43.837 02:39:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.837 02:39:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.837 02:39:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.837 02:39:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.837 02:39:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.837 02:39:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.837 02:39:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.837 02:39:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.837 02:39:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.837 02:39:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.837 02:39:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.837 02:39:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:43.837 02:39:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:43.837 02:39:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:43.837 02:39:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:43.837 02:39:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:43.837 02:39:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:43.837 02:39:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.837 02:39:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:43.837 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:43.837 02:39:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.837 02:39:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.837 02:39:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.837 02:39:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.838 02:39:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.838 02:39:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.838 02:39:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:43.838 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:43.838 02:39:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.838 02:39:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.838 02:39:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.838 02:39:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.838 02:39:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.838 02:39:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:43.838 02:39:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:43.838 02:39:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:43.838 02:39:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.838 02:39:16 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.838 02:39:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:43.838 02:39:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.838 02:39:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:43.838 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:43.838 02:39:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.838 02:39:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.838 02:39:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.838 02:39:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:43.838 02:39:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.838 02:39:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:43.838 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:43.838 02:39:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.838 02:39:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:43.838 02:39:16 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.838 02:39:16 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:43.838 02:39:16 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:43.838 02:39:16 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:19:43.838 02:39:16 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:44.419 02:39:17 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:46.404 02:39:19 -- target/perf_adq.sh@54 -- # sleep 5 00:19:51.695 02:39:24 -- target/perf_adq.sh@67 -- # nvmftestinit 00:19:51.695 02:39:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:51.695 02:39:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.695 02:39:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:51.695 02:39:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:51.695 02:39:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:51.695 02:39:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.695 02:39:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.695 02:39:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.695 02:39:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:51.695 02:39:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:51.695 02:39:24 -- common/autotest_common.sh@10 -- # set +x 00:19:51.695 02:39:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:51.695 02:39:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:51.695 02:39:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:51.695 02:39:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:51.695 02:39:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:51.695 02:39:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:51.695 02:39:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:51.695 02:39:24 -- nvmf/common.sh@295 -- # net_devs=() 00:19:51.695 02:39:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:51.695 02:39:24 -- nvmf/common.sh@296 -- # e810=() 00:19:51.695 02:39:24 -- nvmf/common.sh@296 -- # local -ga e810 00:19:51.695 02:39:24 -- nvmf/common.sh@297 -- # x722=() 00:19:51.695 02:39:24 -- nvmf/common.sh@297 -- # local -ga x722 00:19:51.695 02:39:24 -- nvmf/common.sh@298 -- # mlx=() 00:19:51.695 02:39:24 -- nvmf/common.sh@298 -- # 
local -ga mlx 00:19:51.695 02:39:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.695 02:39:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.695 02:39:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.695 02:39:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.695 02:39:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.695 02:39:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.695 02:39:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.695 02:39:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.695 02:39:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.695 02:39:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.695 02:39:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.695 02:39:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:51.695 02:39:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:51.695 02:39:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:51.695 02:39:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.695 02:39:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:51.695 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:51.695 02:39:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.695 02:39:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:51.695 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:51.695 02:39:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:51.695 02:39:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.695 02:39:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.695 02:39:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:51.695 02:39:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.695 02:39:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:51.695 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:51.695 02:39:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.695 02:39:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.695 02:39:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:51.695 02:39:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:51.695 02:39:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.695 02:39:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:51.695 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:51.695 02:39:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.695 02:39:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:51.695 02:39:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:51.695 02:39:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:51.695 02:39:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:51.695 02:39:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.695 02:39:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.695 02:39:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.695 02:39:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:51.695 02:39:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.695 02:39:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.695 02:39:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:51.695 02:39:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.695 02:39:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.695 02:39:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:51.695 02:39:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:51.695 02:39:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.695 02:39:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.695 02:39:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.695 02:39:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.695 02:39:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:51.695 02:39:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.695 02:39:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.696 02:39:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.696 02:39:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:51.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:19:51.696 00:19:51.696 --- 10.0.0.2 ping statistics --- 00:19:51.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.696 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:19:51.696 02:39:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:51.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.435 ms 00:19:51.696 00:19:51.696 --- 10.0.0.1 ping statistics --- 00:19:51.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.696 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:19:51.696 02:39:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.696 02:39:24 -- nvmf/common.sh@411 -- # return 0 00:19:51.696 02:39:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:51.696 02:39:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.696 02:39:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:51.696 02:39:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:51.696 02:39:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.696 02:39:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:51.696 02:39:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:51.696 02:39:24 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:51.696 02:39:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:51.696 02:39:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:51.696 02:39:24 -- common/autotest_common.sh@10 -- # set +x 00:19:51.696 02:39:24 -- nvmf/common.sh@470 -- # nvmfpid=163832 00:19:51.696 02:39:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:51.696 02:39:24 -- nvmf/common.sh@471 -- # waitforlisten 163832 00:19:51.696 02:39:24 -- common/autotest_common.sh@817 -- # '[' -z 163832 ']' 00:19:51.696 02:39:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.696 02:39:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:51.696 02:39:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.696 02:39:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:51.696 02:39:24 -- common/autotest_common.sh@10 -- # set +x 00:19:51.696 [2024-04-27 02:39:25.047357] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:19:51.696 [2024-04-27 02:39:25.047406] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.696 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.696 [2024-04-27 02:39:25.111579] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.696 [2024-04-27 02:39:25.175915] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.696 [2024-04-27 02:39:25.175951] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.696 [2024-04-27 02:39:25.175961] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.696 [2024-04-27 02:39:25.175969] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.696 [2024-04-27 02:39:25.175976] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
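The nvmf_tcp_init block above builds the back-to-back test topology out of the two E810 ports: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), with an iptables rule opening TCP port 4420 and a ping in each direction to verify connectivity. A condensed sketch of the equivalent manual setup, using the interface, namespace and address values from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

The nvmf_tgt application is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc), so only the target-side port is visible to it.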
00:19:51.696 [2024-04-27 02:39:25.176092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.696 [2024-04-27 02:39:25.176115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.696 [2024-04-27 02:39:25.176235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.696 [2024-04-27 02:39:25.176237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.268 02:39:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:52.268 02:39:25 -- common/autotest_common.sh@850 -- # return 0 00:19:52.268 02:39:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:52.268 02:39:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:52.268 02:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:52.268 02:39:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.268 02:39:25 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:19:52.268 02:39:25 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:52.268 02:39:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.268 02:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:52.268 02:39:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.268 02:39:25 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:19:52.268 02:39:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.268 02:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:52.529 02:39:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.529 02:39:25 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:52.529 02:39:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.529 02:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:52.529 [2024-04-27 02:39:25.950207] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.529 02:39:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.529 02:39:25 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:52.529 02:39:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.529 02:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:52.529 Malloc1 00:19:52.529 02:39:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.529 02:39:25 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:52.529 02:39:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.529 02:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:52.529 02:39:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.529 02:39:25 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:52.529 02:39:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.529 02:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:52.529 02:39:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.529 02:39:26 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.529 02:39:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.529 02:39:26 -- common/autotest_common.sh@10 -- # set +x 00:19:52.529 [2024-04-27 02:39:26.009552] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.529 02:39:26 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.529 02:39:26 -- target/perf_adq.sh@73 -- # perfpid=164071 00:19:52.529 02:39:26 -- target/perf_adq.sh@74 -- # sleep 2 00:19:52.529 02:39:26 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:52.529 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.443 02:39:28 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:19:54.443 02:39:28 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:54.443 02:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.443 02:39:28 -- common/autotest_common.sh@10 -- # set +x 00:19:54.443 02:39:28 -- target/perf_adq.sh@76 -- # wc -l 00:19:54.443 02:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.703 02:39:28 -- target/perf_adq.sh@76 -- # count=4 00:19:54.703 02:39:28 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:19:54.703 02:39:28 -- target/perf_adq.sh@81 -- # wait 164071 00:20:02.841 [2024-04-27 02:39:36.176353] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7abe0 is same with the state(5) to be set 00:20:02.841 Initializing NVMe Controllers 00:20:02.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:02.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:02.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:02.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:02.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:02.841 Initialization complete. Launching workers. 
00:20:02.841 ======================================================== 00:20:02.841 Latency(us) 00:20:02.841 Device Information : IOPS MiB/s Average min max 00:20:02.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9817.50 38.35 6519.28 2465.84 10059.02 00:20:02.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15719.20 61.40 4070.90 964.13 12724.18 00:20:02.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10261.30 40.08 6237.12 2688.39 12777.67 00:20:02.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9866.70 38.54 6486.38 2497.69 13038.88 00:20:02.842 ======================================================== 00:20:02.842 Total : 45664.70 178.38 5605.96 964.13 13038.88 00:20:02.842 00:20:02.842 02:39:36 -- target/perf_adq.sh@82 -- # nvmftestfini 00:20:02.842 02:39:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:02.842 02:39:36 -- nvmf/common.sh@117 -- # sync 00:20:02.842 02:39:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:02.842 02:39:36 -- nvmf/common.sh@120 -- # set +e 00:20:02.842 02:39:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:02.842 02:39:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:02.842 rmmod nvme_tcp 00:20:02.842 rmmod nvme_fabrics 00:20:02.842 rmmod nvme_keyring 00:20:02.842 02:39:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:02.842 02:39:36 -- nvmf/common.sh@124 -- # set -e 00:20:02.842 02:39:36 -- nvmf/common.sh@125 -- # return 0 00:20:02.842 02:39:36 -- nvmf/common.sh@478 -- # '[' -n 163832 ']' 00:20:02.842 02:39:36 -- nvmf/common.sh@479 -- # killprocess 163832 00:20:02.842 02:39:36 -- common/autotest_common.sh@936 -- # '[' -z 163832 ']' 00:20:02.842 02:39:36 -- common/autotest_common.sh@940 -- # kill -0 163832 00:20:02.842 02:39:36 -- common/autotest_common.sh@941 -- # uname 00:20:02.842 02:39:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:02.842 02:39:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 163832 00:20:02.842 02:39:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:02.842 02:39:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:02.842 02:39:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 163832' 00:20:02.842 killing process with pid 163832 00:20:02.842 02:39:36 -- common/autotest_common.sh@955 -- # kill 163832 00:20:02.842 02:39:36 -- common/autotest_common.sh@960 -- # wait 163832 00:20:02.842 02:39:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:02.842 02:39:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:02.842 02:39:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:02.842 02:39:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:02.842 02:39:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:02.842 02:39:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.842 02:39:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.842 02:39:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.389 02:39:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:05.389 02:39:38 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:20:05.390 02:39:38 -- target/perf_adq.sh@52 -- # rmmod ice 00:20:06.776 02:39:40 -- target/perf_adq.sh@53 -- # modprobe ice 00:20:08.164 02:39:41 -- target/perf_adq.sh@54 -- # sleep 5 00:20:13.453 02:39:46 -- target/perf_adq.sh@87 -- # nvmftestinit 00:20:13.453 02:39:46 -- 
nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:13.453 02:39:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.453 02:39:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:13.453 02:39:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:13.453 02:39:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:13.453 02:39:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.453 02:39:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.453 02:39:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.453 02:39:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:13.453 02:39:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:13.453 02:39:46 -- common/autotest_common.sh@10 -- # set +x 00:20:13.453 02:39:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:13.453 02:39:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:13.453 02:39:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:13.453 02:39:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:13.453 02:39:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:13.453 02:39:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:13.453 02:39:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:13.453 02:39:46 -- nvmf/common.sh@295 -- # net_devs=() 00:20:13.453 02:39:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:13.453 02:39:46 -- nvmf/common.sh@296 -- # e810=() 00:20:13.453 02:39:46 -- nvmf/common.sh@296 -- # local -ga e810 00:20:13.453 02:39:46 -- nvmf/common.sh@297 -- # x722=() 00:20:13.453 02:39:46 -- nvmf/common.sh@297 -- # local -ga x722 00:20:13.453 02:39:46 -- nvmf/common.sh@298 -- # mlx=() 00:20:13.453 02:39:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:13.453 02:39:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.453 02:39:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.453 02:39:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.453 02:39:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.453 02:39:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.453 02:39:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.453 02:39:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.453 02:39:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.453 02:39:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.453 02:39:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.453 02:39:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.453 02:39:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:13.453 02:39:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:13.453 02:39:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:13.453 02:39:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.453 02:39:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:13.453 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:13.453 02:39:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:20:13.453 02:39:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.453 02:39:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:13.453 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:13.453 02:39:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:13.453 02:39:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.453 02:39:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.453 02:39:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:13.453 02:39:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.453 02:39:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:13.453 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:13.453 02:39:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.453 02:39:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.453 02:39:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.453 02:39:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:13.453 02:39:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.453 02:39:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:13.453 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:13.453 02:39:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.453 02:39:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:13.453 02:39:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:13.453 02:39:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:13.453 02:39:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:13.453 02:39:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.453 02:39:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.453 02:39:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.453 02:39:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:13.453 02:39:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.453 02:39:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.453 02:39:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:13.453 02:39:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.453 02:39:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.453 02:39:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:13.453 02:39:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:13.453 02:39:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.453 02:39:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:20:13.453 02:39:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.453 02:39:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.453 02:39:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:13.453 02:39:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.454 02:39:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.454 02:39:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.454 02:39:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:13.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:20:13.454 00:20:13.454 --- 10.0.0.2 ping statistics --- 00:20:13.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.454 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:20:13.454 02:39:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:20:13.454 00:20:13.454 --- 10.0.0.1 ping statistics --- 00:20:13.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.454 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:20:13.454 02:39:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.454 02:39:47 -- nvmf/common.sh@411 -- # return 0 00:20:13.454 02:39:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:13.454 02:39:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.454 02:39:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:13.454 02:39:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:13.715 02:39:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.715 02:39:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:13.715 02:39:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:13.715 02:39:47 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:20:13.715 02:39:47 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:13.715 02:39:47 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:13.715 02:39:47 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:13.715 net.core.busy_poll = 1 00:20:13.715 02:39:47 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:13.715 net.core.busy_read = 1 00:20:13.715 02:39:47 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:13.716 02:39:47 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:13.716 02:39:47 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:13.716 02:39:47 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:13.716 02:39:47 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:13.978 02:39:47 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:13.978 02:39:47 -- 
nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:13.978 02:39:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:13.978 02:39:47 -- common/autotest_common.sh@10 -- # set +x 00:20:13.978 02:39:47 -- nvmf/common.sh@470 -- # nvmfpid=168650 00:20:13.978 02:39:47 -- nvmf/common.sh@471 -- # waitforlisten 168650 00:20:13.978 02:39:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:13.978 02:39:47 -- common/autotest_common.sh@817 -- # '[' -z 168650 ']' 00:20:13.978 02:39:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.978 02:39:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:13.978 02:39:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.978 02:39:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:13.978 02:39:47 -- common/autotest_common.sh@10 -- # set +x 00:20:13.978 [2024-04-27 02:39:47.418852] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:20:13.978 [2024-04-27 02:39:47.418901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.978 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.978 [2024-04-27 02:39:47.486916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.978 [2024-04-27 02:39:47.550545] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.978 [2024-04-27 02:39:47.550585] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.978 [2024-04-27 02:39:47.550593] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.978 [2024-04-27 02:39:47.550602] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.978 [2024-04-27 02:39:47.550608] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
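The adq_configure_driver step traced above is what enables ADQ for this second run: hardware TC offload is turned on for the target-side port, the channel-pkt-inspect-optimize private flag is cleared, busy polling is enabled system-wide, and an mqprio root qdisc plus a flower filter steer NVMe/TCP traffic (dst_ip 10.0.0.2, dst_port 4420) into a dedicated hardware traffic class before XPS/RXQ affinity is set. A condensed sketch, with the device, namespace and address values taken from this run:

    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
        prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The matching SPDK-side knobs (sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1) follow in the target configuration below.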
00:20:13.978 [2024-04-27 02:39:47.550768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.978 [2024-04-27 02:39:47.550890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.978 [2024-04-27 02:39:47.551029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.978 [2024-04-27 02:39:47.551032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.923 02:39:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:14.923 02:39:48 -- common/autotest_common.sh@850 -- # return 0 00:20:14.923 02:39:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:14.923 02:39:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:14.923 02:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.923 02:39:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.923 02:39:48 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:20:14.923 02:39:48 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:14.923 02:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.923 02:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.923 02:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.923 02:39:48 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:20:14.923 02:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.923 02:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.923 02:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.923 02:39:48 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:14.923 02:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.923 02:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.923 [2024-04-27 02:39:48.326222] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.923 02:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.923 02:39:48 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:14.923 02:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.923 02:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.923 Malloc1 00:20:14.923 02:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.923 02:39:48 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.923 02:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.923 02:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.923 02:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.923 02:39:48 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:14.923 02:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.923 02:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.923 02:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.923 02:39:48 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.923 02:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.923 02:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:14.923 [2024-04-27 02:39:48.381582] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.923 02:39:48 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.923 02:39:48 -- target/perf_adq.sh@94 -- # perfpid=169000 00:20:14.923 02:39:48 -- target/perf_adq.sh@95 -- # sleep 2 00:20:14.923 02:39:48 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:14.923 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.840 02:39:50 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:20:16.840 02:39:50 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:16.840 02:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.840 02:39:50 -- target/perf_adq.sh@97 -- # wc -l 00:20:16.840 02:39:50 -- common/autotest_common.sh@10 -- # set +x 00:20:16.840 02:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.840 02:39:50 -- target/perf_adq.sh@97 -- # count=2 00:20:16.840 02:39:50 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:20:16.840 02:39:50 -- target/perf_adq.sh@103 -- # wait 169000 00:20:24.978 Initializing NVMe Controllers 00:20:24.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:24.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:24.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:24.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:24.978 Initialization complete. Launching workers. 00:20:24.978 ======================================================== 00:20:24.978 Latency(us) 00:20:24.978 Device Information : IOPS MiB/s Average min max 00:20:24.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6767.20 26.43 9460.66 1490.80 54695.77 00:20:24.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7608.18 29.72 8415.38 1406.27 53545.84 00:20:24.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11839.22 46.25 5405.61 1565.82 10076.22 00:20:24.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7066.39 27.60 9088.09 1297.32 54972.67 00:20:24.978 ======================================================== 00:20:24.978 Total : 33280.99 130.00 7700.08 1297.32 54972.67 00:20:24.978 00:20:24.978 02:39:58 -- target/perf_adq.sh@104 -- # nvmftestfini 00:20:24.978 02:39:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:24.978 02:39:58 -- nvmf/common.sh@117 -- # sync 00:20:24.978 02:39:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:24.978 02:39:58 -- nvmf/common.sh@120 -- # set +e 00:20:24.978 02:39:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:24.978 02:39:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:24.978 rmmod nvme_tcp 00:20:25.238 rmmod nvme_fabrics 00:20:25.238 rmmod nvme_keyring 00:20:25.238 02:39:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.238 02:39:58 -- nvmf/common.sh@124 -- # set -e 00:20:25.238 02:39:58 -- nvmf/common.sh@125 -- # return 0 00:20:25.238 02:39:58 -- nvmf/common.sh@478 -- # '[' -n 168650 ']' 00:20:25.238 02:39:58 -- nvmf/common.sh@479 -- # killprocess 168650 00:20:25.238 02:39:58 -- common/autotest_common.sh@936 -- # '[' -z 168650 ']' 00:20:25.238 02:39:58 -- common/autotest_common.sh@940 -- # 
kill -0 168650 00:20:25.238 02:39:58 -- common/autotest_common.sh@941 -- # uname 00:20:25.238 02:39:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:25.238 02:39:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 168650 00:20:25.238 02:39:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:25.238 02:39:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:25.238 02:39:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 168650' 00:20:25.238 killing process with pid 168650 00:20:25.238 02:39:58 -- common/autotest_common.sh@955 -- # kill 168650 00:20:25.238 02:39:58 -- common/autotest_common.sh@960 -- # wait 168650 00:20:25.500 02:39:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:25.500 02:39:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:25.500 02:39:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:25.500 02:39:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.500 02:39:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:25.500 02:39:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.500 02:39:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.500 02:39:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.802 02:40:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:28.802 02:40:01 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:20:28.802 00:20:28.802 real 0m52.381s 00:20:28.802 user 2m49.155s 00:20:28.802 sys 0m10.495s 00:20:28.802 02:40:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:28.802 02:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:28.802 ************************************ 00:20:28.802 END TEST nvmf_perf_adq 00:20:28.802 ************************************ 00:20:28.802 02:40:01 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:28.802 02:40:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:28.802 02:40:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:28.802 02:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:28.802 ************************************ 00:20:28.802 START TEST nvmf_shutdown 00:20:28.802 ************************************ 00:20:28.802 02:40:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:28.802 * Looking for test storage... 
00:20:28.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.802 02:40:02 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.802 02:40:02 -- nvmf/common.sh@7 -- # uname -s 00:20:28.803 02:40:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.803 02:40:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.803 02:40:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.803 02:40:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.803 02:40:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.803 02:40:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.803 02:40:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.803 02:40:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.803 02:40:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.803 02:40:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.803 02:40:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.803 02:40:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.803 02:40:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.803 02:40:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.803 02:40:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.803 02:40:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.803 02:40:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.803 02:40:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.803 02:40:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.803 02:40:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.803 02:40:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.803 02:40:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.803 02:40:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.803 02:40:02 -- paths/export.sh@5 -- # export PATH 00:20:28.803 02:40:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.803 02:40:02 -- nvmf/common.sh@47 -- # : 0 00:20:28.803 02:40:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:28.803 02:40:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:28.803 02:40:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.803 02:40:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.803 02:40:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.803 02:40:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:28.803 02:40:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:28.803 02:40:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:28.803 02:40:02 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:28.803 02:40:02 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:28.803 02:40:02 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:28.803 02:40:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:28.803 02:40:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:28.803 02:40:02 -- common/autotest_common.sh@10 -- # set +x 00:20:28.803 ************************************ 00:20:28.803 START TEST nvmf_shutdown_tc1 00:20:28.803 ************************************ 00:20:28.803 02:40:02 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:20:28.803 02:40:02 -- target/shutdown.sh@74 -- # starttarget 00:20:28.803 02:40:02 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:28.803 02:40:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:28.803 02:40:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.803 02:40:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:28.803 02:40:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:28.803 02:40:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:28.803 02:40:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.803 02:40:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.803 02:40:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.803 02:40:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:29.064 02:40:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:29.064 02:40:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:29.064 02:40:02 -- common/autotest_common.sh@10 -- # set +x 00:20:35.669 02:40:09 -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:20:35.669 02:40:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:35.669 02:40:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:35.669 02:40:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:35.669 02:40:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:35.669 02:40:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:35.669 02:40:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:35.669 02:40:09 -- nvmf/common.sh@295 -- # net_devs=() 00:20:35.669 02:40:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:35.669 02:40:09 -- nvmf/common.sh@296 -- # e810=() 00:20:35.669 02:40:09 -- nvmf/common.sh@296 -- # local -ga e810 00:20:35.669 02:40:09 -- nvmf/common.sh@297 -- # x722=() 00:20:35.669 02:40:09 -- nvmf/common.sh@297 -- # local -ga x722 00:20:35.669 02:40:09 -- nvmf/common.sh@298 -- # mlx=() 00:20:35.669 02:40:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:35.669 02:40:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.669 02:40:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.669 02:40:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.669 02:40:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.669 02:40:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.669 02:40:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.669 02:40:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.669 02:40:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.669 02:40:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.669 02:40:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.669 02:40:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.669 02:40:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:35.669 02:40:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:35.669 02:40:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:35.669 02:40:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:35.669 02:40:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:35.669 02:40:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:35.669 02:40:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.669 02:40:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:35.669 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:35.669 02:40:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.669 02:40:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.669 02:40:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.669 02:40:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.669 02:40:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.669 02:40:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.669 02:40:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:35.669 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:35.669 02:40:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.669 02:40:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.669 02:40:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.669 02:40:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.669 02:40:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.669 02:40:09 -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:35.669 02:40:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:35.669 02:40:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:35.670 02:40:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.670 02:40:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.670 02:40:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:35.670 02:40:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.670 02:40:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:35.670 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:35.670 02:40:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.670 02:40:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.670 02:40:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.670 02:40:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:35.670 02:40:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.670 02:40:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:35.670 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:35.670 02:40:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.670 02:40:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:35.670 02:40:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:35.670 02:40:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:35.670 02:40:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:35.670 02:40:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:35.670 02:40:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.670 02:40:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.670 02:40:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.670 02:40:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:35.670 02:40:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.670 02:40:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.670 02:40:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:35.670 02:40:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:35.670 02:40:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.670 02:40:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:35.670 02:40:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:35.670 02:40:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:35.670 02:40:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:35.670 02:40:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.670 02:40:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.670 02:40:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:35.670 02:40:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.931 02:40:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.931 02:40:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.931 02:40:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:35.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:35.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:20:35.931 00:20:35.931 --- 10.0.0.2 ping statistics --- 00:20:35.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.931 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:20:35.931 02:40:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:35.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.423 ms 00:20:35.931 00:20:35.931 --- 10.0.0.1 ping statistics --- 00:20:35.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.931 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:20:35.931 02:40:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.931 02:40:09 -- nvmf/common.sh@411 -- # return 0 00:20:35.931 02:40:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:35.931 02:40:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.931 02:40:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:35.931 02:40:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:35.931 02:40:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.931 02:40:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:35.931 02:40:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:35.931 02:40:09 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:35.931 02:40:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:35.931 02:40:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:35.931 02:40:09 -- common/autotest_common.sh@10 -- # set +x 00:20:35.931 02:40:09 -- nvmf/common.sh@470 -- # nvmfpid=175469 00:20:35.931 02:40:09 -- nvmf/common.sh@471 -- # waitforlisten 175469 00:20:35.931 02:40:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:35.931 02:40:09 -- common/autotest_common.sh@817 -- # '[' -z 175469 ']' 00:20:35.931 02:40:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.931 02:40:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:35.931 02:40:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.931 02:40:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:35.931 02:40:09 -- common/autotest_common.sh@10 -- # set +x 00:20:35.931 [2024-04-27 02:40:09.484619] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:20:35.931 [2024-04-27 02:40:09.484682] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.931 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.219 [2024-04-27 02:40:09.556297] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.219 [2024-04-27 02:40:09.628024] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.219 [2024-04-27 02:40:09.628061] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:36.219 [2024-04-27 02:40:09.628070] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.219 [2024-04-27 02:40:09.628079] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.219 [2024-04-27 02:40:09.628086] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.219 [2024-04-27 02:40:09.628197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.219 [2024-04-27 02:40:09.628327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.219 [2024-04-27 02:40:09.628487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.219 [2024-04-27 02:40:09.628488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:36.861 02:40:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:36.861 02:40:10 -- common/autotest_common.sh@850 -- # return 0 00:20:36.861 02:40:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:36.861 02:40:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:36.861 02:40:10 -- common/autotest_common.sh@10 -- # set +x 00:20:36.861 02:40:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.861 02:40:10 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.861 02:40:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.861 02:40:10 -- common/autotest_common.sh@10 -- # set +x 00:20:36.861 [2024-04-27 02:40:10.305878] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.861 02:40:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.861 02:40:10 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:36.861 02:40:10 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:36.861 02:40:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:36.861 02:40:10 -- common/autotest_common.sh@10 -- # set +x 00:20:36.861 02:40:10 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:36.861 02:40:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.861 02:40:10 -- target/shutdown.sh@28 -- # cat 00:20:36.861 02:40:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.861 02:40:10 -- target/shutdown.sh@28 -- # cat 00:20:36.861 02:40:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.861 02:40:10 -- target/shutdown.sh@28 -- # cat 00:20:36.861 02:40:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.861 02:40:10 -- target/shutdown.sh@28 -- # cat 00:20:36.861 02:40:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.861 02:40:10 -- target/shutdown.sh@28 -- # cat 00:20:36.861 02:40:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.861 02:40:10 -- target/shutdown.sh@28 -- # cat 00:20:36.861 02:40:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.861 02:40:10 -- target/shutdown.sh@28 -- # cat 00:20:36.861 02:40:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.861 02:40:10 -- target/shutdown.sh@28 -- # cat 00:20:36.861 02:40:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.861 02:40:10 -- target/shutdown.sh@28 -- # cat 00:20:36.861 02:40:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.861 02:40:10 -- 
target/shutdown.sh@28 -- # cat 00:20:36.861 02:40:10 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:36.861 02:40:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.861 02:40:10 -- common/autotest_common.sh@10 -- # set +x 00:20:36.861 Malloc1 00:20:36.861 [2024-04-27 02:40:10.409233] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.861 Malloc2 00:20:36.861 Malloc3 00:20:37.122 Malloc4 00:20:37.122 Malloc5 00:20:37.122 Malloc6 00:20:37.122 Malloc7 00:20:37.122 Malloc8 00:20:37.122 Malloc9 00:20:37.383 Malloc10 00:20:37.383 02:40:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.383 02:40:10 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:37.383 02:40:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:37.383 02:40:10 -- common/autotest_common.sh@10 -- # set +x 00:20:37.383 02:40:10 -- target/shutdown.sh@78 -- # perfpid=175747 00:20:37.383 02:40:10 -- target/shutdown.sh@79 -- # waitforlisten 175747 /var/tmp/bdevperf.sock 00:20:37.383 02:40:10 -- common/autotest_common.sh@817 -- # '[' -z 175747 ']' 00:20:37.383 02:40:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.383 02:40:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:37.383 02:40:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.383 02:40:10 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:37.383 02:40:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:37.383 02:40:10 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:37.383 02:40:10 -- common/autotest_common.sh@10 -- # set +x 00:20:37.383 02:40:10 -- nvmf/common.sh@521 -- # config=() 00:20:37.383 02:40:10 -- nvmf/common.sh@521 -- # local subsystem config 00:20:37.383 02:40:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:37.383 02:40:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:37.383 { 00:20:37.383 "params": { 00:20:37.383 "name": "Nvme$subsystem", 00:20:37.383 "trtype": "$TEST_TRANSPORT", 00:20:37.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.383 "adrfam": "ipv4", 00:20:37.383 "trsvcid": "$NVMF_PORT", 00:20:37.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.383 "hdgst": ${hdgst:-false}, 00:20:37.383 "ddgst": ${ddgst:-false} 00:20:37.383 }, 00:20:37.383 "method": "bdev_nvme_attach_controller" 00:20:37.383 } 00:20:37.383 EOF 00:20:37.383 )") 00:20:37.383 02:40:10 -- nvmf/common.sh@543 -- # cat 00:20:37.383 02:40:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:37.383 02:40:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:37.383 { 00:20:37.383 "params": { 00:20:37.383 "name": "Nvme$subsystem", 00:20:37.383 "trtype": "$TEST_TRANSPORT", 00:20:37.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.383 "adrfam": "ipv4", 00:20:37.383 "trsvcid": "$NVMF_PORT", 00:20:37.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.383 "hdgst": ${hdgst:-false}, 00:20:37.383 "ddgst": ${ddgst:-false} 00:20:37.383 }, 00:20:37.383 "method": "bdev_nvme_attach_controller" 00:20:37.383 } 00:20:37.383 EOF 
00:20:37.383 )") 00:20:37.383 02:40:10 -- nvmf/common.sh@543 -- # cat 00:20:37.383 02:40:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:37.383 02:40:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:37.383 { 00:20:37.383 "params": { 00:20:37.383 "name": "Nvme$subsystem", 00:20:37.383 "trtype": "$TEST_TRANSPORT", 00:20:37.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.383 "adrfam": "ipv4", 00:20:37.383 "trsvcid": "$NVMF_PORT", 00:20:37.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.383 "hdgst": ${hdgst:-false}, 00:20:37.383 "ddgst": ${ddgst:-false} 00:20:37.383 }, 00:20:37.383 "method": "bdev_nvme_attach_controller" 00:20:37.383 } 00:20:37.383 EOF 00:20:37.383 )") 00:20:37.383 02:40:10 -- nvmf/common.sh@543 -- # cat 00:20:37.383 02:40:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:37.383 02:40:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:37.384 { 00:20:37.384 "params": { 00:20:37.384 "name": "Nvme$subsystem", 00:20:37.384 "trtype": "$TEST_TRANSPORT", 00:20:37.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.384 "adrfam": "ipv4", 00:20:37.384 "trsvcid": "$NVMF_PORT", 00:20:37.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.384 "hdgst": ${hdgst:-false}, 00:20:37.384 "ddgst": ${ddgst:-false} 00:20:37.384 }, 00:20:37.384 "method": "bdev_nvme_attach_controller" 00:20:37.384 } 00:20:37.384 EOF 00:20:37.384 )") 00:20:37.384 02:40:10 -- nvmf/common.sh@543 -- # cat 00:20:37.384 02:40:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:37.384 02:40:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:37.384 { 00:20:37.384 "params": { 00:20:37.384 "name": "Nvme$subsystem", 00:20:37.384 "trtype": "$TEST_TRANSPORT", 00:20:37.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.384 "adrfam": "ipv4", 00:20:37.384 "trsvcid": "$NVMF_PORT", 00:20:37.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.384 "hdgst": ${hdgst:-false}, 00:20:37.384 "ddgst": ${ddgst:-false} 00:20:37.384 }, 00:20:37.384 "method": "bdev_nvme_attach_controller" 00:20:37.384 } 00:20:37.384 EOF 00:20:37.384 )") 00:20:37.384 02:40:10 -- nvmf/common.sh@543 -- # cat 00:20:37.384 02:40:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:37.384 02:40:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:37.384 { 00:20:37.384 "params": { 00:20:37.384 "name": "Nvme$subsystem", 00:20:37.384 "trtype": "$TEST_TRANSPORT", 00:20:37.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.384 "adrfam": "ipv4", 00:20:37.384 "trsvcid": "$NVMF_PORT", 00:20:37.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.384 "hdgst": ${hdgst:-false}, 00:20:37.384 "ddgst": ${ddgst:-false} 00:20:37.384 }, 00:20:37.384 "method": "bdev_nvme_attach_controller" 00:20:37.384 } 00:20:37.384 EOF 00:20:37.384 )") 00:20:37.384 02:40:10 -- nvmf/common.sh@543 -- # cat 00:20:37.384 [2024-04-27 02:40:10.859369] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:20:37.384 [2024-04-27 02:40:10.859422] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:37.384 02:40:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:37.384 02:40:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:37.384 { 00:20:37.384 "params": { 00:20:37.384 "name": "Nvme$subsystem", 00:20:37.384 "trtype": "$TEST_TRANSPORT", 00:20:37.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.384 "adrfam": "ipv4", 00:20:37.384 "trsvcid": "$NVMF_PORT", 00:20:37.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.384 "hdgst": ${hdgst:-false}, 00:20:37.384 "ddgst": ${ddgst:-false} 00:20:37.384 }, 00:20:37.384 "method": "bdev_nvme_attach_controller" 00:20:37.384 } 00:20:37.384 EOF 00:20:37.384 )") 00:20:37.384 02:40:10 -- nvmf/common.sh@543 -- # cat 00:20:37.384 02:40:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:37.384 02:40:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:37.384 { 00:20:37.384 "params": { 00:20:37.384 "name": "Nvme$subsystem", 00:20:37.384 "trtype": "$TEST_TRANSPORT", 00:20:37.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.384 "adrfam": "ipv4", 00:20:37.384 "trsvcid": "$NVMF_PORT", 00:20:37.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.384 "hdgst": ${hdgst:-false}, 00:20:37.384 "ddgst": ${ddgst:-false} 00:20:37.384 }, 00:20:37.384 "method": "bdev_nvme_attach_controller" 00:20:37.384 } 00:20:37.384 EOF 00:20:37.384 )") 00:20:37.384 02:40:10 -- nvmf/common.sh@543 -- # cat 00:20:37.384 02:40:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:37.384 02:40:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:37.384 { 00:20:37.384 "params": { 00:20:37.384 "name": "Nvme$subsystem", 00:20:37.384 "trtype": "$TEST_TRANSPORT", 00:20:37.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.384 "adrfam": "ipv4", 00:20:37.384 "trsvcid": "$NVMF_PORT", 00:20:37.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.384 "hdgst": ${hdgst:-false}, 00:20:37.384 "ddgst": ${ddgst:-false} 00:20:37.384 }, 00:20:37.384 "method": "bdev_nvme_attach_controller" 00:20:37.384 } 00:20:37.384 EOF 00:20:37.384 )") 00:20:37.384 02:40:10 -- nvmf/common.sh@543 -- # cat 00:20:37.384 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.384 02:40:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:37.384 02:40:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:37.384 { 00:20:37.384 "params": { 00:20:37.384 "name": "Nvme$subsystem", 00:20:37.384 "trtype": "$TEST_TRANSPORT", 00:20:37.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.384 "adrfam": "ipv4", 00:20:37.384 "trsvcid": "$NVMF_PORT", 00:20:37.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.384 "hdgst": ${hdgst:-false}, 00:20:37.384 "ddgst": ${ddgst:-false} 00:20:37.384 }, 00:20:37.384 "method": "bdev_nvme_attach_controller" 00:20:37.384 } 00:20:37.384 EOF 00:20:37.384 )") 00:20:37.384 02:40:10 -- nvmf/common.sh@543 -- # cat 00:20:37.384 02:40:10 -- nvmf/common.sh@545 -- # jq . 
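
The JSON printed on the next trace lines is assembled one subsystem at a time: for each subsystem number the helper appends a heredoc fragment describing a bdev_nvme_attach_controller call, joins the fragments with commas, and pretty-prints the result through jq. A reduced sketch of that assembly pattern (addresses and NQNs as in the trace; the real gen_nvmf_target_json in nvmf/common.sh also embeds the fragments in the full --json document handed to the app):

# Assemble one bdev_nvme_attach_controller fragment per subsystem number,
# mirroring the config+=(...) heredoc lines traced above.
config=()
for n in 1 2 3; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# Join with commas and pretty-print; the [ ] wrapper is only for this sketch
# so that jq receives valid JSON on its own.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .
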
00:20:37.384 02:40:10 -- nvmf/common.sh@546 -- # IFS=, 00:20:37.384 02:40:10 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:37.384 "params": { 00:20:37.384 "name": "Nvme1", 00:20:37.384 "trtype": "tcp", 00:20:37.384 "traddr": "10.0.0.2", 00:20:37.384 "adrfam": "ipv4", 00:20:37.384 "trsvcid": "4420", 00:20:37.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.384 "hdgst": false, 00:20:37.384 "ddgst": false 00:20:37.384 }, 00:20:37.384 "method": "bdev_nvme_attach_controller" 00:20:37.384 },{ 00:20:37.384 "params": { 00:20:37.384 "name": "Nvme2", 00:20:37.384 "trtype": "tcp", 00:20:37.384 "traddr": "10.0.0.2", 00:20:37.384 "adrfam": "ipv4", 00:20:37.384 "trsvcid": "4420", 00:20:37.384 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:37.384 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:37.384 "hdgst": false, 00:20:37.384 "ddgst": false 00:20:37.384 }, 00:20:37.384 "method": "bdev_nvme_attach_controller" 00:20:37.384 },{ 00:20:37.384 "params": { 00:20:37.384 "name": "Nvme3", 00:20:37.384 "trtype": "tcp", 00:20:37.384 "traddr": "10.0.0.2", 00:20:37.384 "adrfam": "ipv4", 00:20:37.384 "trsvcid": "4420", 00:20:37.384 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:37.385 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:37.385 "hdgst": false, 00:20:37.385 "ddgst": false 00:20:37.385 }, 00:20:37.385 "method": "bdev_nvme_attach_controller" 00:20:37.385 },{ 00:20:37.385 "params": { 00:20:37.385 "name": "Nvme4", 00:20:37.385 "trtype": "tcp", 00:20:37.385 "traddr": "10.0.0.2", 00:20:37.385 "adrfam": "ipv4", 00:20:37.385 "trsvcid": "4420", 00:20:37.385 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:37.385 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:37.385 "hdgst": false, 00:20:37.385 "ddgst": false 00:20:37.385 }, 00:20:37.385 "method": "bdev_nvme_attach_controller" 00:20:37.385 },{ 00:20:37.385 "params": { 00:20:37.385 "name": "Nvme5", 00:20:37.385 "trtype": "tcp", 00:20:37.385 "traddr": "10.0.0.2", 00:20:37.385 "adrfam": "ipv4", 00:20:37.385 "trsvcid": "4420", 00:20:37.385 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:37.385 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:37.385 "hdgst": false, 00:20:37.385 "ddgst": false 00:20:37.385 }, 00:20:37.385 "method": "bdev_nvme_attach_controller" 00:20:37.385 },{ 00:20:37.385 "params": { 00:20:37.385 "name": "Nvme6", 00:20:37.385 "trtype": "tcp", 00:20:37.385 "traddr": "10.0.0.2", 00:20:37.385 "adrfam": "ipv4", 00:20:37.385 "trsvcid": "4420", 00:20:37.385 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:37.385 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:37.385 "hdgst": false, 00:20:37.385 "ddgst": false 00:20:37.385 }, 00:20:37.385 "method": "bdev_nvme_attach_controller" 00:20:37.385 },{ 00:20:37.385 "params": { 00:20:37.385 "name": "Nvme7", 00:20:37.385 "trtype": "tcp", 00:20:37.385 "traddr": "10.0.0.2", 00:20:37.385 "adrfam": "ipv4", 00:20:37.385 "trsvcid": "4420", 00:20:37.385 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:37.385 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:37.385 "hdgst": false, 00:20:37.385 "ddgst": false 00:20:37.385 }, 00:20:37.385 "method": "bdev_nvme_attach_controller" 00:20:37.385 },{ 00:20:37.385 "params": { 00:20:37.385 "name": "Nvme8", 00:20:37.385 "trtype": "tcp", 00:20:37.385 "traddr": "10.0.0.2", 00:20:37.385 "adrfam": "ipv4", 00:20:37.385 "trsvcid": "4420", 00:20:37.385 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:37.385 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:37.385 "hdgst": false, 00:20:37.385 "ddgst": false 00:20:37.385 }, 00:20:37.385 "method": 
"bdev_nvme_attach_controller" 00:20:37.385 },{ 00:20:37.385 "params": { 00:20:37.385 "name": "Nvme9", 00:20:37.385 "trtype": "tcp", 00:20:37.385 "traddr": "10.0.0.2", 00:20:37.385 "adrfam": "ipv4", 00:20:37.385 "trsvcid": "4420", 00:20:37.385 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:37.385 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:37.385 "hdgst": false, 00:20:37.385 "ddgst": false 00:20:37.385 }, 00:20:37.385 "method": "bdev_nvme_attach_controller" 00:20:37.385 },{ 00:20:37.385 "params": { 00:20:37.385 "name": "Nvme10", 00:20:37.385 "trtype": "tcp", 00:20:37.385 "traddr": "10.0.0.2", 00:20:37.385 "adrfam": "ipv4", 00:20:37.385 "trsvcid": "4420", 00:20:37.385 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:37.385 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:37.385 "hdgst": false, 00:20:37.385 "ddgst": false 00:20:37.385 }, 00:20:37.385 "method": "bdev_nvme_attach_controller" 00:20:37.385 }' 00:20:37.385 [2024-04-27 02:40:10.919520] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.385 [2024-04-27 02:40:10.982589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.769 02:40:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:38.769 02:40:12 -- common/autotest_common.sh@850 -- # return 0 00:20:38.769 02:40:12 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:38.769 02:40:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.769 02:40:12 -- common/autotest_common.sh@10 -- # set +x 00:20:38.769 02:40:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.769 02:40:12 -- target/shutdown.sh@83 -- # kill -9 175747 00:20:38.769 02:40:12 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:38.769 02:40:12 -- target/shutdown.sh@87 -- # sleep 1 00:20:39.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 175747 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:39.712 02:40:13 -- target/shutdown.sh@88 -- # kill -0 175469 00:20:39.712 02:40:13 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:39.712 02:40:13 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:39.712 02:40:13 -- nvmf/common.sh@521 -- # config=() 00:20:39.712 02:40:13 -- nvmf/common.sh@521 -- # local subsystem config 00:20:39.712 02:40:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:39.712 02:40:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:39.712 { 00:20:39.712 "params": { 00:20:39.712 "name": "Nvme$subsystem", 00:20:39.712 "trtype": "$TEST_TRANSPORT", 00:20:39.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.712 "adrfam": "ipv4", 00:20:39.712 "trsvcid": "$NVMF_PORT", 00:20:39.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.712 "hdgst": ${hdgst:-false}, 00:20:39.712 "ddgst": ${ddgst:-false} 00:20:39.712 }, 00:20:39.712 "method": "bdev_nvme_attach_controller" 00:20:39.712 } 00:20:39.712 EOF 00:20:39.712 )") 00:20:39.712 02:40:13 -- nvmf/common.sh@543 -- # cat 00:20:39.712 02:40:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:39.712 02:40:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:39.712 { 00:20:39.712 "params": { 00:20:39.712 "name": "Nvme$subsystem", 00:20:39.712 "trtype": "$TEST_TRANSPORT", 00:20:39.712 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:20:39.712 "adrfam": "ipv4", 00:20:39.712 "trsvcid": "$NVMF_PORT", 00:20:39.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.712 "hdgst": ${hdgst:-false}, 00:20:39.712 "ddgst": ${ddgst:-false} 00:20:39.712 }, 00:20:39.712 "method": "bdev_nvme_attach_controller" 00:20:39.712 } 00:20:39.712 EOF 00:20:39.712 )") 00:20:39.712 02:40:13 -- nvmf/common.sh@543 -- # cat 00:20:39.712 02:40:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:39.712 02:40:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:39.712 { 00:20:39.712 "params": { 00:20:39.712 "name": "Nvme$subsystem", 00:20:39.712 "trtype": "$TEST_TRANSPORT", 00:20:39.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.712 "adrfam": "ipv4", 00:20:39.712 "trsvcid": "$NVMF_PORT", 00:20:39.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.712 "hdgst": ${hdgst:-false}, 00:20:39.712 "ddgst": ${ddgst:-false} 00:20:39.712 }, 00:20:39.712 "method": "bdev_nvme_attach_controller" 00:20:39.712 } 00:20:39.712 EOF 00:20:39.712 )") 00:20:39.712 02:40:13 -- nvmf/common.sh@543 -- # cat 00:20:39.712 02:40:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:39.712 02:40:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:39.712 { 00:20:39.712 "params": { 00:20:39.712 "name": "Nvme$subsystem", 00:20:39.712 "trtype": "$TEST_TRANSPORT", 00:20:39.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.712 "adrfam": "ipv4", 00:20:39.712 "trsvcid": "$NVMF_PORT", 00:20:39.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.712 "hdgst": ${hdgst:-false}, 00:20:39.712 "ddgst": ${ddgst:-false} 00:20:39.712 }, 00:20:39.712 "method": "bdev_nvme_attach_controller" 00:20:39.712 } 00:20:39.712 EOF 00:20:39.712 )") 00:20:39.712 02:40:13 -- nvmf/common.sh@543 -- # cat 00:20:39.712 02:40:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:39.712 02:40:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:39.712 { 00:20:39.712 "params": { 00:20:39.712 "name": "Nvme$subsystem", 00:20:39.712 "trtype": "$TEST_TRANSPORT", 00:20:39.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.712 "adrfam": "ipv4", 00:20:39.712 "trsvcid": "$NVMF_PORT", 00:20:39.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.712 "hdgst": ${hdgst:-false}, 00:20:39.712 "ddgst": ${ddgst:-false} 00:20:39.712 }, 00:20:39.712 "method": "bdev_nvme_attach_controller" 00:20:39.712 } 00:20:39.712 EOF 00:20:39.712 )") 00:20:39.712 02:40:13 -- nvmf/common.sh@543 -- # cat 00:20:39.712 02:40:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:39.712 02:40:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:39.712 { 00:20:39.712 "params": { 00:20:39.712 "name": "Nvme$subsystem", 00:20:39.712 "trtype": "$TEST_TRANSPORT", 00:20:39.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.712 "adrfam": "ipv4", 00:20:39.712 "trsvcid": "$NVMF_PORT", 00:20:39.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.712 "hdgst": ${hdgst:-false}, 00:20:39.712 "ddgst": ${ddgst:-false} 00:20:39.712 }, 00:20:39.712 "method": "bdev_nvme_attach_controller" 00:20:39.712 } 00:20:39.712 EOF 00:20:39.712 )") 00:20:39.712 02:40:13 -- nvmf/common.sh@543 -- # cat 00:20:39.974 02:40:13 -- nvmf/common.sh@523 -- # for 
subsystem in "${@:-1}" 00:20:39.974 02:40:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:39.974 { 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme$subsystem", 00:20:39.974 "trtype": "$TEST_TRANSPORT", 00:20:39.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "$NVMF_PORT", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.974 "hdgst": ${hdgst:-false}, 00:20:39.974 "ddgst": ${ddgst:-false} 00:20:39.974 }, 00:20:39.974 "method": "bdev_nvme_attach_controller" 00:20:39.974 } 00:20:39.974 EOF 00:20:39.974 )") 00:20:39.974 02:40:13 -- nvmf/common.sh@543 -- # cat 00:20:39.974 02:40:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:39.974 02:40:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:39.974 { 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme$subsystem", 00:20:39.974 "trtype": "$TEST_TRANSPORT", 00:20:39.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "$NVMF_PORT", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.974 "hdgst": ${hdgst:-false}, 00:20:39.974 "ddgst": ${ddgst:-false} 00:20:39.974 }, 00:20:39.974 "method": "bdev_nvme_attach_controller" 00:20:39.974 } 00:20:39.974 EOF 00:20:39.974 )") 00:20:39.974 [2024-04-27 02:40:13.347058] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:20:39.974 [2024-04-27 02:40:13.347121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176229 ] 00:20:39.974 02:40:13 -- nvmf/common.sh@543 -- # cat 00:20:39.974 02:40:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:39.974 02:40:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:39.974 { 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme$subsystem", 00:20:39.974 "trtype": "$TEST_TRANSPORT", 00:20:39.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "$NVMF_PORT", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.974 "hdgst": ${hdgst:-false}, 00:20:39.974 "ddgst": ${ddgst:-false} 00:20:39.974 }, 00:20:39.974 "method": "bdev_nvme_attach_controller" 00:20:39.974 } 00:20:39.974 EOF 00:20:39.974 )") 00:20:39.974 02:40:13 -- nvmf/common.sh@543 -- # cat 00:20:39.974 02:40:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:39.974 02:40:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:39.974 { 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme$subsystem", 00:20:39.974 "trtype": "$TEST_TRANSPORT", 00:20:39.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "$NVMF_PORT", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.974 "hdgst": ${hdgst:-false}, 00:20:39.974 "ddgst": ${ddgst:-false} 00:20:39.974 }, 00:20:39.974 "method": "bdev_nvme_attach_controller" 00:20:39.974 } 00:20:39.974 EOF 00:20:39.974 )") 00:20:39.974 02:40:13 -- nvmf/common.sh@543 -- # cat 00:20:39.974 02:40:13 -- nvmf/common.sh@545 -- # jq . 
00:20:39.974 02:40:13 -- nvmf/common.sh@546 -- # IFS=, 00:20:39.974 02:40:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme1", 00:20:39.974 "trtype": "tcp", 00:20:39.974 "traddr": "10.0.0.2", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "4420", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.974 "hdgst": false, 00:20:39.974 "ddgst": false 00:20:39.974 }, 00:20:39.974 "method": "bdev_nvme_attach_controller" 00:20:39.974 },{ 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme2", 00:20:39.974 "trtype": "tcp", 00:20:39.974 "traddr": "10.0.0.2", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "4420", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:39.974 "hdgst": false, 00:20:39.974 "ddgst": false 00:20:39.974 }, 00:20:39.974 "method": "bdev_nvme_attach_controller" 00:20:39.974 },{ 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme3", 00:20:39.974 "trtype": "tcp", 00:20:39.974 "traddr": "10.0.0.2", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "4420", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:39.974 "hdgst": false, 00:20:39.974 "ddgst": false 00:20:39.974 }, 00:20:39.974 "method": "bdev_nvme_attach_controller" 00:20:39.974 },{ 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme4", 00:20:39.974 "trtype": "tcp", 00:20:39.974 "traddr": "10.0.0.2", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "4420", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:39.974 "hdgst": false, 00:20:39.974 "ddgst": false 00:20:39.974 }, 00:20:39.974 "method": "bdev_nvme_attach_controller" 00:20:39.974 },{ 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme5", 00:20:39.974 "trtype": "tcp", 00:20:39.974 "traddr": "10.0.0.2", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "4420", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:39.974 "hdgst": false, 00:20:39.974 "ddgst": false 00:20:39.974 }, 00:20:39.974 "method": "bdev_nvme_attach_controller" 00:20:39.974 },{ 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme6", 00:20:39.974 "trtype": "tcp", 00:20:39.974 "traddr": "10.0.0.2", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "4420", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:39.974 "hdgst": false, 00:20:39.974 "ddgst": false 00:20:39.974 }, 00:20:39.974 "method": "bdev_nvme_attach_controller" 00:20:39.974 },{ 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme7", 00:20:39.974 "trtype": "tcp", 00:20:39.974 "traddr": "10.0.0.2", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "4420", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:39.974 "hdgst": false, 00:20:39.974 "ddgst": false 00:20:39.974 }, 00:20:39.974 "method": "bdev_nvme_attach_controller" 00:20:39.974 },{ 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme8", 00:20:39.974 "trtype": "tcp", 00:20:39.974 "traddr": "10.0.0.2", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "4420", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:39.974 "hdgst": false, 00:20:39.974 "ddgst": false 00:20:39.974 }, 00:20:39.974 "method": 
"bdev_nvme_attach_controller" 00:20:39.974 },{ 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme9", 00:20:39.974 "trtype": "tcp", 00:20:39.974 "traddr": "10.0.0.2", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "4420", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:39.974 "hdgst": false, 00:20:39.974 "ddgst": false 00:20:39.974 }, 00:20:39.974 "method": "bdev_nvme_attach_controller" 00:20:39.974 },{ 00:20:39.974 "params": { 00:20:39.974 "name": "Nvme10", 00:20:39.974 "trtype": "tcp", 00:20:39.974 "traddr": "10.0.0.2", 00:20:39.974 "adrfam": "ipv4", 00:20:39.974 "trsvcid": "4420", 00:20:39.974 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:39.974 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:39.974 "hdgst": false, 00:20:39.974 "ddgst": false 00:20:39.974 }, 00:20:39.974 "method": "bdev_nvme_attach_controller" 00:20:39.974 }' 00:20:39.974 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.974 [2024-04-27 02:40:13.406884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.974 [2024-04-27 02:40:13.469332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.359 Running I/O for 1 seconds... 00:20:42.744 00:20:42.744 Latency(us) 00:20:42.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.745 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.745 Verification LBA range: start 0x0 length 0x400 00:20:42.745 Nvme1n1 : 1.10 231.78 14.49 0.00 0.00 273160.11 23156.05 248162.99 00:20:42.745 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.745 Verification LBA range: start 0x0 length 0x400 00:20:42.745 Nvme2n1 : 1.15 223.06 13.94 0.00 0.00 278909.65 23811.41 260396.37 00:20:42.745 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.745 Verification LBA range: start 0x0 length 0x400 00:20:42.745 Nvme3n1 : 1.13 169.39 10.59 0.00 0.00 361219.98 24139.09 325058.56 00:20:42.745 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.745 Verification LBA range: start 0x0 length 0x400 00:20:42.745 Nvme4n1 : 1.20 212.95 13.31 0.00 0.00 273170.56 24903.68 279620.27 00:20:42.745 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.745 Verification LBA range: start 0x0 length 0x400 00:20:42.745 Nvme5n1 : 1.20 266.73 16.67 0.00 0.00 221955.75 23374.51 246415.36 00:20:42.745 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.745 Verification LBA range: start 0x0 length 0x400 00:20:42.745 Nvme6n1 : 1.11 292.83 18.30 0.00 0.00 196745.46 6608.21 223696.21 00:20:42.745 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.745 Verification LBA range: start 0x0 length 0x400 00:20:42.745 Nvme7n1 : 1.20 265.93 16.62 0.00 0.00 214883.50 13981.01 253405.87 00:20:42.745 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.745 Verification LBA range: start 0x0 length 0x400 00:20:42.745 Nvme8n1 : 1.15 222.09 13.88 0.00 0.00 251290.03 25668.27 249910.61 00:20:42.745 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.745 Verification LBA range: start 0x0 length 0x400 00:20:42.745 Nvme9n1 : 1.17 219.28 13.71 0.00 0.00 249561.17 24248.32 274377.39 00:20:42.745 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.745 Verification LBA range: start 0x0 length 0x400 00:20:42.745 
Nvme10n1 : 1.23 258.84 16.18 0.00 0.00 209891.11 12943.36 270882.13 00:20:42.745 =================================================================================================================== 00:20:42.745 Total : 2362.89 147.68 0.00 0.00 246578.23 6608.21 325058.56 00:20:42.745 02:40:16 -- target/shutdown.sh@94 -- # stoptarget 00:20:42.745 02:40:16 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:42.745 02:40:16 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:42.745 02:40:16 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:42.745 02:40:16 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:42.745 02:40:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:42.745 02:40:16 -- nvmf/common.sh@117 -- # sync 00:20:42.745 02:40:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.745 02:40:16 -- nvmf/common.sh@120 -- # set +e 00:20:42.745 02:40:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.745 02:40:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.745 rmmod nvme_tcp 00:20:42.745 rmmod nvme_fabrics 00:20:42.745 rmmod nvme_keyring 00:20:42.745 02:40:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.745 02:40:16 -- nvmf/common.sh@124 -- # set -e 00:20:42.745 02:40:16 -- nvmf/common.sh@125 -- # return 0 00:20:42.745 02:40:16 -- nvmf/common.sh@478 -- # '[' -n 175469 ']' 00:20:42.745 02:40:16 -- nvmf/common.sh@479 -- # killprocess 175469 00:20:42.745 02:40:16 -- common/autotest_common.sh@936 -- # '[' -z 175469 ']' 00:20:42.745 02:40:16 -- common/autotest_common.sh@940 -- # kill -0 175469 00:20:42.745 02:40:16 -- common/autotest_common.sh@941 -- # uname 00:20:42.745 02:40:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:42.745 02:40:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 175469 00:20:42.745 02:40:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:42.745 02:40:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:42.745 02:40:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 175469' 00:20:42.745 killing process with pid 175469 00:20:42.745 02:40:16 -- common/autotest_common.sh@955 -- # kill 175469 00:20:42.745 02:40:16 -- common/autotest_common.sh@960 -- # wait 175469 00:20:43.006 02:40:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:43.006 02:40:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:43.006 02:40:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:43.006 02:40:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:43.006 02:40:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:43.006 02:40:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.006 02:40:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.006 02:40:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.553 02:40:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:45.553 00:20:45.553 real 0m16.178s 00:20:45.553 user 0m33.201s 00:20:45.553 sys 0m6.407s 00:20:45.553 02:40:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:45.553 02:40:18 -- common/autotest_common.sh@10 -- # set +x 00:20:45.553 ************************************ 00:20:45.553 END TEST nvmf_shutdown_tc1 00:20:45.553 ************************************ 00:20:45.553 02:40:18 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 
nvmf_shutdown_tc2 00:20:45.553 02:40:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:45.553 02:40:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:45.553 02:40:18 -- common/autotest_common.sh@10 -- # set +x 00:20:45.553 ************************************ 00:20:45.553 START TEST nvmf_shutdown_tc2 00:20:45.553 ************************************ 00:20:45.553 02:40:18 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:20:45.553 02:40:18 -- target/shutdown.sh@99 -- # starttarget 00:20:45.553 02:40:18 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:45.553 02:40:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:45.553 02:40:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.553 02:40:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:45.553 02:40:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:45.554 02:40:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:45.554 02:40:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.554 02:40:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.554 02:40:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.554 02:40:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:45.554 02:40:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:45.554 02:40:18 -- common/autotest_common.sh@10 -- # set +x 00:20:45.554 02:40:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:45.554 02:40:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:45.554 02:40:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:45.554 02:40:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:45.554 02:40:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:45.554 02:40:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:45.554 02:40:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:45.554 02:40:18 -- nvmf/common.sh@295 -- # net_devs=() 00:20:45.554 02:40:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:45.554 02:40:18 -- nvmf/common.sh@296 -- # e810=() 00:20:45.554 02:40:18 -- nvmf/common.sh@296 -- # local -ga e810 00:20:45.554 02:40:18 -- nvmf/common.sh@297 -- # x722=() 00:20:45.554 02:40:18 -- nvmf/common.sh@297 -- # local -ga x722 00:20:45.554 02:40:18 -- nvmf/common.sh@298 -- # mlx=() 00:20:45.554 02:40:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:45.554 02:40:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.554 02:40:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.554 02:40:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.554 02:40:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.554 02:40:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.554 02:40:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.554 02:40:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.554 02:40:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.554 02:40:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.554 02:40:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.554 02:40:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.554 02:40:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:45.554 
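
nvmftestinit for tc2 walks the same NIC discovery as tc1 above: candidate ports are classified purely by PCI vendor:device ID, with Intel E810 (0x1592, 0x159b) and X722 (0x37d2) parts plus a list of Mellanox IDs looked up in a pci_bus_cache map that the sourced setup helpers populate beforehand. A reduced sketch of that classification (the cache entry here is illustrative; SPDK_TEST_NVMF_NICS=e810 in this job, so only the e810 list is kept):

# "vendor:device" -> space-separated PCI addresses; normally populated by the
# sourced pci helpers, hard-coded here only for illustration.
declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1"
)
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})    # both E810 IDs probed in the trace
e810+=(${pci_bus_cache["$intel:0x159b"]})    # matches the two ports on this rig
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several Mellanox IDs probed
pci_devs=("${e810[@]}")                      # NICS=e810, so keep only E810 ports
printf 'candidate NIC: %s\n' "${pci_devs[@]}"
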
02:40:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:45.554 02:40:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:45.554 02:40:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.554 02:40:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:45.554 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:45.554 02:40:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.554 02:40:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:45.554 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:45.554 02:40:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:45.554 02:40:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.554 02:40:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.554 02:40:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:45.554 02:40:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.554 02:40:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:45.554 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:45.554 02:40:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.554 02:40:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.554 02:40:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.554 02:40:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:45.554 02:40:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.554 02:40:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:45.554 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:45.554 02:40:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.554 02:40:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:45.554 02:40:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:45.554 02:40:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:45.554 02:40:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:45.554 02:40:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.554 02:40:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.554 02:40:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.554 02:40:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:45.554 02:40:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.554 02:40:18 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.554 02:40:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:45.554 02:40:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.554 02:40:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.554 02:40:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:45.554 02:40:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:45.554 02:40:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.554 02:40:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.554 02:40:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.554 02:40:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.554 02:40:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:45.554 02:40:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.554 02:40:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.554 02:40:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.554 02:40:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:45.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:20:45.554 00:20:45.554 --- 10.0.0.2 ping statistics --- 00:20:45.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.554 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:20:45.554 02:40:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:45.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:45.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:20:45.554 00:20:45.554 --- 10.0.0.1 ping statistics --- 00:20:45.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.554 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:20:45.554 02:40:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.554 02:40:19 -- nvmf/common.sh@411 -- # return 0 00:20:45.554 02:40:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:45.554 02:40:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.554 02:40:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:45.554 02:40:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:45.554 02:40:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.554 02:40:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:45.554 02:40:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:45.554 02:40:19 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:45.554 02:40:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:45.554 02:40:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:45.554 02:40:19 -- common/autotest_common.sh@10 -- # set +x 00:20:45.554 02:40:19 -- nvmf/common.sh@470 -- # nvmfpid=177459 00:20:45.554 02:40:19 -- nvmf/common.sh@471 -- # waitforlisten 177459 00:20:45.555 02:40:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:45.555 02:40:19 -- common/autotest_common.sh@817 -- # '[' -z 177459 ']' 00:20:45.555 02:40:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.555 02:40:19 -- common/autotest_common.sh@822 
-- # local max_retries=100 00:20:45.555 02:40:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.555 02:40:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:45.555 02:40:19 -- common/autotest_common.sh@10 -- # set +x 00:20:45.815 [2024-04-27 02:40:19.217849] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:20:45.816 [2024-04-27 02:40:19.217942] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.816 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.816 [2024-04-27 02:40:19.291282] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:45.816 [2024-04-27 02:40:19.363629] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.816 [2024-04-27 02:40:19.363670] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.816 [2024-04-27 02:40:19.363680] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.816 [2024-04-27 02:40:19.363688] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.816 [2024-04-27 02:40:19.363696] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.816 [2024-04-27 02:40:19.363804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.816 [2024-04-27 02:40:19.363931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:45.816 [2024-04-27 02:40:19.364087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.816 [2024-04-27 02:40:19.364088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:46.389 02:40:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:46.389 02:40:19 -- common/autotest_common.sh@850 -- # return 0 00:20:46.389 02:40:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:46.389 02:40:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:46.389 02:40:19 -- common/autotest_common.sh@10 -- # set +x 00:20:46.649 02:40:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.649 02:40:20 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:46.649 02:40:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.649 02:40:20 -- common/autotest_common.sh@10 -- # set +x 00:20:46.649 [2024-04-27 02:40:20.026826] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.649 02:40:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.649 02:40:20 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:46.649 02:40:20 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:46.649 02:40:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:46.649 02:40:20 -- common/autotest_common.sh@10 -- # set +x 00:20:46.649 02:40:20 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:46.649 02:40:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.649 02:40:20 -- target/shutdown.sh@28 -- # cat 
00:20:46.649 02:40:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.649 02:40:20 -- target/shutdown.sh@28 -- # cat 00:20:46.649 02:40:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.649 02:40:20 -- target/shutdown.sh@28 -- # cat 00:20:46.650 02:40:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.650 02:40:20 -- target/shutdown.sh@28 -- # cat 00:20:46.650 02:40:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.650 02:40:20 -- target/shutdown.sh@28 -- # cat 00:20:46.650 02:40:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.650 02:40:20 -- target/shutdown.sh@28 -- # cat 00:20:46.650 02:40:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.650 02:40:20 -- target/shutdown.sh@28 -- # cat 00:20:46.650 02:40:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.650 02:40:20 -- target/shutdown.sh@28 -- # cat 00:20:46.650 02:40:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.650 02:40:20 -- target/shutdown.sh@28 -- # cat 00:20:46.650 02:40:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.650 02:40:20 -- target/shutdown.sh@28 -- # cat 00:20:46.650 02:40:20 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:46.650 02:40:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.650 02:40:20 -- common/autotest_common.sh@10 -- # set +x 00:20:46.650 Malloc1 00:20:46.650 [2024-04-27 02:40:20.123589] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.650 Malloc2 00:20:46.650 Malloc3 00:20:46.650 Malloc4 00:20:46.650 Malloc5 00:20:46.911 Malloc6 00:20:46.911 Malloc7 00:20:46.911 Malloc8 00:20:46.911 Malloc9 00:20:46.911 Malloc10 00:20:46.911 02:40:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.911 02:40:20 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:46.911 02:40:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:46.911 02:40:20 -- common/autotest_common.sh@10 -- # set +x 00:20:46.911 02:40:20 -- target/shutdown.sh@103 -- # perfpid=177733 00:20:46.911 02:40:20 -- target/shutdown.sh@104 -- # waitforlisten 177733 /var/tmp/bdevperf.sock 00:20:46.911 02:40:20 -- common/autotest_common.sh@817 -- # '[' -z 177733 ']' 00:20:46.911 02:40:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.911 02:40:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:46.911 02:40:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
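Note on the create_subsystems loop traced above: each "# cat" iteration appends one subsystem's commands to rpcs.txt, and the batch is then pushed to the target, which is where the Malloc1..Malloc10 bdevs and the TCP listener on 10.0.0.2 port 4420 in the log come from. The file's literal contents are not echoed here; per subsystem the end state is roughly what these standalone rpc.py calls would produce (bdev size, block size and serial numbers below are illustrative):

# Approximate per-subsystem equivalent of the batched rpcs.txt (a sketch, not the
# literal shutdown.sh contents): one Malloc bdev, one subsystem, one namespace, one listener.
for i in {1..10}; do
    scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done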
00:20:46.911 02:40:20 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:46.911 02:40:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:46.911 02:40:20 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:46.911 02:40:20 -- common/autotest_common.sh@10 -- # set +x 00:20:46.911 02:40:20 -- nvmf/common.sh@521 -- # config=() 00:20:46.911 02:40:20 -- nvmf/common.sh@521 -- # local subsystem config 00:20:46.911 02:40:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:46.911 02:40:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:46.911 { 00:20:46.911 "params": { 00:20:46.911 "name": "Nvme$subsystem", 00:20:46.911 "trtype": "$TEST_TRANSPORT", 00:20:46.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.911 "adrfam": "ipv4", 00:20:46.911 "trsvcid": "$NVMF_PORT", 00:20:46.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.911 "hdgst": ${hdgst:-false}, 00:20:46.911 "ddgst": ${ddgst:-false} 00:20:46.911 }, 00:20:46.911 "method": "bdev_nvme_attach_controller" 00:20:46.911 } 00:20:46.911 EOF 00:20:46.911 )") 00:20:46.911 02:40:20 -- nvmf/common.sh@543 -- # cat 00:20:47.173 02:40:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.173 02:40:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.173 { 00:20:47.173 "params": { 00:20:47.173 "name": "Nvme$subsystem", 00:20:47.173 "trtype": "$TEST_TRANSPORT", 00:20:47.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.173 "adrfam": "ipv4", 00:20:47.173 "trsvcid": "$NVMF_PORT", 00:20:47.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.173 "hdgst": ${hdgst:-false}, 00:20:47.173 "ddgst": ${ddgst:-false} 00:20:47.173 }, 00:20:47.173 "method": "bdev_nvme_attach_controller" 00:20:47.173 } 00:20:47.173 EOF 00:20:47.173 )") 00:20:47.173 02:40:20 -- nvmf/common.sh@543 -- # cat 00:20:47.173 02:40:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.173 02:40:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.173 { 00:20:47.173 "params": { 00:20:47.173 "name": "Nvme$subsystem", 00:20:47.173 "trtype": "$TEST_TRANSPORT", 00:20:47.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.173 "adrfam": "ipv4", 00:20:47.173 "trsvcid": "$NVMF_PORT", 00:20:47.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.174 "hdgst": ${hdgst:-false}, 00:20:47.174 "ddgst": ${ddgst:-false} 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 } 00:20:47.174 EOF 00:20:47.174 )") 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # cat 00:20:47.174 02:40:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.174 { 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme$subsystem", 00:20:47.174 "trtype": "$TEST_TRANSPORT", 00:20:47.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "$NVMF_PORT", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.174 "hdgst": ${hdgst:-false}, 00:20:47.174 "ddgst": ${ddgst:-false} 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 } 00:20:47.174 EOF 00:20:47.174 )") 
00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # cat 00:20:47.174 02:40:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.174 { 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme$subsystem", 00:20:47.174 "trtype": "$TEST_TRANSPORT", 00:20:47.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "$NVMF_PORT", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.174 "hdgst": ${hdgst:-false}, 00:20:47.174 "ddgst": ${ddgst:-false} 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 } 00:20:47.174 EOF 00:20:47.174 )") 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # cat 00:20:47.174 02:40:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.174 { 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme$subsystem", 00:20:47.174 "trtype": "$TEST_TRANSPORT", 00:20:47.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "$NVMF_PORT", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.174 "hdgst": ${hdgst:-false}, 00:20:47.174 "ddgst": ${ddgst:-false} 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 } 00:20:47.174 EOF 00:20:47.174 )") 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # cat 00:20:47.174 02:40:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.174 { 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme$subsystem", 00:20:47.174 "trtype": "$TEST_TRANSPORT", 00:20:47.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "$NVMF_PORT", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.174 "hdgst": ${hdgst:-false}, 00:20:47.174 "ddgst": ${ddgst:-false} 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 } 00:20:47.174 EOF 00:20:47.174 )") 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # cat 00:20:47.174 02:40:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.174 { 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme$subsystem", 00:20:47.174 "trtype": "$TEST_TRANSPORT", 00:20:47.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "$NVMF_PORT", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.174 "hdgst": ${hdgst:-false}, 00:20:47.174 "ddgst": ${ddgst:-false} 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 } 00:20:47.174 EOF 00:20:47.174 )") 00:20:47.174 [2024-04-27 02:40:20.580564] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:20:47.174 [2024-04-27 02:40:20.580635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177733 ] 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # cat 00:20:47.174 02:40:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.174 { 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme$subsystem", 00:20:47.174 "trtype": "$TEST_TRANSPORT", 00:20:47.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "$NVMF_PORT", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.174 "hdgst": ${hdgst:-false}, 00:20:47.174 "ddgst": ${ddgst:-false} 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 } 00:20:47.174 EOF 00:20:47.174 )") 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # cat 00:20:47.174 02:40:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.174 { 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme$subsystem", 00:20:47.174 "trtype": "$TEST_TRANSPORT", 00:20:47.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "$NVMF_PORT", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.174 "hdgst": ${hdgst:-false}, 00:20:47.174 "ddgst": ${ddgst:-false} 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 } 00:20:47.174 EOF 00:20:47.174 )") 00:20:47.174 02:40:20 -- nvmf/common.sh@543 -- # cat 00:20:47.174 02:40:20 -- nvmf/common.sh@545 -- # jq . 
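Note on the config assembly traced above: the repeated config+=("$(cat <<-EOF ...)") lines are gen_nvmf_target_json building one "bdev_nvme_attach_controller" block per subsystem; the blocks are comma-joined via IFS and validated with jq, and the resulting JSON (printed just below) reaches bdevperf through --json as /dev/fd/63. A condensed sketch of the same pattern follows; the top-level wrapper object is illustrative, since the trace only shows the joined params/method blocks being printed:

# Sketch of the JSON-config generation traced above: one heredoc per controller,
# comma-joined and validated with jq before being handed to bdevperf via --json.
gen_bdevperf_json() {
    local subsystem config=()
    for subsystem in "$@"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
}
# e.g. build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_bdevperf_json {1..10}) -q 64 -o 65536 -w verify -t 10
# (the process substitution is why the logged command line shows --json /dev/fd/63)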
00:20:47.174 02:40:20 -- nvmf/common.sh@546 -- # IFS=, 00:20:47.174 02:40:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme1", 00:20:47.174 "trtype": "tcp", 00:20:47.174 "traddr": "10.0.0.2", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "4420", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.174 "hdgst": false, 00:20:47.174 "ddgst": false 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 },{ 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme2", 00:20:47.174 "trtype": "tcp", 00:20:47.174 "traddr": "10.0.0.2", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "4420", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:47.174 "hdgst": false, 00:20:47.174 "ddgst": false 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 },{ 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme3", 00:20:47.174 "trtype": "tcp", 00:20:47.174 "traddr": "10.0.0.2", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "4420", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:47.174 "hdgst": false, 00:20:47.174 "ddgst": false 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 },{ 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme4", 00:20:47.174 "trtype": "tcp", 00:20:47.174 "traddr": "10.0.0.2", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "4420", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:47.174 "hdgst": false, 00:20:47.174 "ddgst": false 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 },{ 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme5", 00:20:47.174 "trtype": "tcp", 00:20:47.174 "traddr": "10.0.0.2", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "4420", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:47.174 "hdgst": false, 00:20:47.174 "ddgst": false 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 },{ 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme6", 00:20:47.174 "trtype": "tcp", 00:20:47.174 "traddr": "10.0.0.2", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "4420", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:47.174 "hdgst": false, 00:20:47.174 "ddgst": false 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 },{ 00:20:47.174 "params": { 00:20:47.174 "name": "Nvme7", 00:20:47.174 "trtype": "tcp", 00:20:47.174 "traddr": "10.0.0.2", 00:20:47.174 "adrfam": "ipv4", 00:20:47.174 "trsvcid": "4420", 00:20:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:47.174 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:47.174 "hdgst": false, 00:20:47.174 "ddgst": false 00:20:47.174 }, 00:20:47.174 "method": "bdev_nvme_attach_controller" 00:20:47.174 },{ 00:20:47.174 "params": { 00:20:47.175 "name": "Nvme8", 00:20:47.175 "trtype": "tcp", 00:20:47.175 "traddr": "10.0.0.2", 00:20:47.175 "adrfam": "ipv4", 00:20:47.175 "trsvcid": "4420", 00:20:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:47.175 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:47.175 "hdgst": false, 00:20:47.175 "ddgst": false 00:20:47.175 }, 00:20:47.175 "method": 
"bdev_nvme_attach_controller" 00:20:47.175 },{ 00:20:47.175 "params": { 00:20:47.175 "name": "Nvme9", 00:20:47.175 "trtype": "tcp", 00:20:47.175 "traddr": "10.0.0.2", 00:20:47.175 "adrfam": "ipv4", 00:20:47.175 "trsvcid": "4420", 00:20:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:47.175 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:47.175 "hdgst": false, 00:20:47.175 "ddgst": false 00:20:47.175 }, 00:20:47.175 "method": "bdev_nvme_attach_controller" 00:20:47.175 },{ 00:20:47.175 "params": { 00:20:47.175 "name": "Nvme10", 00:20:47.175 "trtype": "tcp", 00:20:47.175 "traddr": "10.0.0.2", 00:20:47.175 "adrfam": "ipv4", 00:20:47.175 "trsvcid": "4420", 00:20:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:47.175 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:47.175 "hdgst": false, 00:20:47.175 "ddgst": false 00:20:47.175 }, 00:20:47.175 "method": "bdev_nvme_attach_controller" 00:20:47.175 }' 00:20:47.175 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.175 [2024-04-27 02:40:20.640871] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.175 [2024-04-27 02:40:20.703555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.560 Running I/O for 10 seconds... 00:20:48.560 02:40:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:48.560 02:40:22 -- common/autotest_common.sh@850 -- # return 0 00:20:48.560 02:40:22 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:48.560 02:40:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.560 02:40:22 -- common/autotest_common.sh@10 -- # set +x 00:20:48.821 02:40:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.821 02:40:22 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:48.821 02:40:22 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:48.821 02:40:22 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:48.821 02:40:22 -- target/shutdown.sh@57 -- # local ret=1 00:20:48.821 02:40:22 -- target/shutdown.sh@58 -- # local i 00:20:48.821 02:40:22 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:48.821 02:40:22 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:48.821 02:40:22 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:48.821 02:40:22 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:48.821 02:40:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.821 02:40:22 -- common/autotest_common.sh@10 -- # set +x 00:20:48.821 02:40:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.821 02:40:22 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:48.821 02:40:22 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:48.821 02:40:22 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:49.082 02:40:22 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:49.082 02:40:22 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:49.082 02:40:22 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:49.082 02:40:22 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:49.083 02:40:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.083 02:40:22 -- common/autotest_common.sh@10 -- # set +x 00:20:49.083 02:40:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.083 02:40:22 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:49.083 02:40:22 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:49.083 02:40:22 -- target/shutdown.sh@67 -- # sleep 0.25 
00:20:49.343 02:40:22 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:49.343 02:40:22 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:49.343 02:40:22 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:49.343 02:40:22 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:49.343 02:40:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.343 02:40:22 -- common/autotest_common.sh@10 -- # set +x 00:20:49.343 02:40:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.343 02:40:22 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:49.343 02:40:22 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:49.343 02:40:22 -- target/shutdown.sh@64 -- # ret=0 00:20:49.343 02:40:22 -- target/shutdown.sh@65 -- # break 00:20:49.343 02:40:22 -- target/shutdown.sh@69 -- # return 0 00:20:49.343 02:40:22 -- target/shutdown.sh@110 -- # killprocess 177733 00:20:49.343 02:40:22 -- common/autotest_common.sh@936 -- # '[' -z 177733 ']' 00:20:49.343 02:40:22 -- common/autotest_common.sh@940 -- # kill -0 177733 00:20:49.343 02:40:22 -- common/autotest_common.sh@941 -- # uname 00:20:49.343 02:40:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:49.343 02:40:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 177733 00:20:49.603 02:40:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:49.603 02:40:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:49.603 02:40:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 177733' 00:20:49.603 killing process with pid 177733 00:20:49.603 02:40:23 -- common/autotest_common.sh@955 -- # kill 177733 00:20:49.603 02:40:23 -- common/autotest_common.sh@960 -- # wait 177733 00:20:49.603 Received shutdown signal, test time was about 0.974307 seconds 00:20:49.603 00:20:49.603 Latency(us) 00:20:49.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.603 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.603 Verification LBA range: start 0x0 length 0x400 00:20:49.603 Nvme1n1 : 0.92 208.95 13.06 0.00 0.00 302007.18 23811.41 263891.63 00:20:49.603 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.603 Verification LBA range: start 0x0 length 0x400 00:20:49.603 Nvme2n1 : 0.93 275.44 17.22 0.00 0.00 224564.27 20534.61 210589.01 00:20:49.603 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.603 Verification LBA range: start 0x0 length 0x400 00:20:49.603 Nvme3n1 : 0.97 263.86 16.49 0.00 0.00 229993.60 23702.19 258648.75 00:20:49.603 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.603 Verification LBA range: start 0x0 length 0x400 00:20:49.603 Nvme4n1 : 0.95 278.08 17.38 0.00 0.00 212527.09 3044.69 237677.23 00:20:49.603 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.603 Verification LBA range: start 0x0 length 0x400 00:20:49.604 Nvme5n1 : 0.97 263.68 16.48 0.00 0.00 220163.91 16274.77 239424.85 00:20:49.604 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.604 Verification LBA range: start 0x0 length 0x400 00:20:49.604 Nvme6n1 : 0.94 205.04 12.82 0.00 0.00 276052.76 24357.55 269134.51 00:20:49.604 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.604 Verification LBA range: start 0x0 length 0x400 00:20:49.604 Nvme7n1 : 0.97 197.24 12.33 0.00 0.00 282007.04 19223.89 
316320.43 00:20:49.604 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.604 Verification LBA range: start 0x0 length 0x400 00:20:49.604 Nvme8n1 : 0.93 206.20 12.89 0.00 0.00 260504.75 24357.55 241172.48 00:20:49.604 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.604 Verification LBA range: start 0x0 length 0x400 00:20:49.604 Nvme9n1 : 0.95 202.27 12.64 0.00 0.00 260898.13 25777.49 281367.89 00:20:49.604 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.604 Verification LBA range: start 0x0 length 0x400 00:20:49.604 Nvme10n1 : 0.92 139.01 8.69 0.00 0.00 367180.80 31457.28 354768.21 00:20:49.604 =================================================================================================================== 00:20:49.604 Total : 2239.78 139.99 0.00 0.00 255261.85 3044.69 354768.21 00:20:49.863 02:40:23 -- target/shutdown.sh@113 -- # sleep 1 00:20:50.805 02:40:24 -- target/shutdown.sh@114 -- # kill -0 177459 00:20:50.805 02:40:24 -- target/shutdown.sh@116 -- # stoptarget 00:20:50.805 02:40:24 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:50.805 02:40:24 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:50.805 02:40:24 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:50.805 02:40:24 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:50.805 02:40:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:50.805 02:40:24 -- nvmf/common.sh@117 -- # sync 00:20:50.805 02:40:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:50.805 02:40:24 -- nvmf/common.sh@120 -- # set +e 00:20:50.805 02:40:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:50.805 02:40:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:50.805 rmmod nvme_tcp 00:20:50.805 rmmod nvme_fabrics 00:20:50.805 rmmod nvme_keyring 00:20:50.805 02:40:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:50.805 02:40:24 -- nvmf/common.sh@124 -- # set -e 00:20:50.805 02:40:24 -- nvmf/common.sh@125 -- # return 0 00:20:50.805 02:40:24 -- nvmf/common.sh@478 -- # '[' -n 177459 ']' 00:20:50.805 02:40:24 -- nvmf/common.sh@479 -- # killprocess 177459 00:20:50.805 02:40:24 -- common/autotest_common.sh@936 -- # '[' -z 177459 ']' 00:20:50.805 02:40:24 -- common/autotest_common.sh@940 -- # kill -0 177459 00:20:50.805 02:40:24 -- common/autotest_common.sh@941 -- # uname 00:20:50.805 02:40:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:50.805 02:40:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 177459 00:20:50.805 02:40:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:50.805 02:40:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:50.805 02:40:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 177459' 00:20:50.805 killing process with pid 177459 00:20:50.805 02:40:24 -- common/autotest_common.sh@955 -- # kill 177459 00:20:50.805 02:40:24 -- common/autotest_common.sh@960 -- # wait 177459 00:20:51.065 02:40:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:51.065 02:40:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:51.065 02:40:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:51.065 02:40:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:51.065 02:40:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:51.065 02:40:24 -- 
nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.065 02:40:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.065 02:40:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.613 02:40:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:53.613 00:20:53.613 real 0m7.923s 00:20:53.613 user 0m23.794s 00:20:53.613 sys 0m1.284s 00:20:53.613 02:40:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:53.613 02:40:26 -- common/autotest_common.sh@10 -- # set +x 00:20:53.613 ************************************ 00:20:53.613 END TEST nvmf_shutdown_tc2 00:20:53.613 ************************************ 00:20:53.613 02:40:26 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:53.613 02:40:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:53.613 02:40:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:53.613 02:40:26 -- common/autotest_common.sh@10 -- # set +x 00:20:53.613 ************************************ 00:20:53.613 START TEST nvmf_shutdown_tc3 00:20:53.613 ************************************ 00:20:53.613 02:40:26 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:20:53.613 02:40:26 -- target/shutdown.sh@121 -- # starttarget 00:20:53.613 02:40:26 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:53.613 02:40:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:53.613 02:40:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.613 02:40:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:53.613 02:40:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:53.613 02:40:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:53.613 02:40:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.613 02:40:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.613 02:40:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.613 02:40:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:53.613 02:40:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:53.613 02:40:26 -- common/autotest_common.sh@10 -- # set +x 00:20:53.613 02:40:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:53.613 02:40:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:53.613 02:40:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:53.613 02:40:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:53.613 02:40:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:53.613 02:40:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:53.613 02:40:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:53.613 02:40:26 -- nvmf/common.sh@295 -- # net_devs=() 00:20:53.613 02:40:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:53.613 02:40:26 -- nvmf/common.sh@296 -- # e810=() 00:20:53.613 02:40:26 -- nvmf/common.sh@296 -- # local -ga e810 00:20:53.613 02:40:26 -- nvmf/common.sh@297 -- # x722=() 00:20:53.613 02:40:26 -- nvmf/common.sh@297 -- # local -ga x722 00:20:53.613 02:40:26 -- nvmf/common.sh@298 -- # mlx=() 00:20:53.613 02:40:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:53.613 02:40:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.613 02:40:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.613 02:40:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.613 02:40:26 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.613 02:40:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.613 02:40:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.613 02:40:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.613 02:40:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.613 02:40:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.613 02:40:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.613 02:40:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.613 02:40:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:53.613 02:40:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:53.613 02:40:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:53.613 02:40:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.613 02:40:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:53.613 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:53.613 02:40:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.613 02:40:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:53.613 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:53.613 02:40:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:53.613 02:40:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.613 02:40:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.613 02:40:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:53.613 02:40:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.613 02:40:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:53.613 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:53.613 02:40:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.613 02:40:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.613 02:40:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.613 02:40:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:53.613 02:40:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.613 02:40:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:53.613 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:53.613 02:40:26 -- 
nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.613 02:40:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:53.613 02:40:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:53.613 02:40:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:53.613 02:40:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:53.613 02:40:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.613 02:40:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.613 02:40:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.613 02:40:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:53.613 02:40:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.613 02:40:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.613 02:40:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:53.613 02:40:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.613 02:40:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.614 02:40:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:53.614 02:40:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:53.614 02:40:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.614 02:40:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.614 02:40:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.614 02:40:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.614 02:40:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:53.614 02:40:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.614 02:40:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.614 02:40:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.614 02:40:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:53.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:20:53.614 00:20:53.614 --- 10.0.0.2 ping statistics --- 00:20:53.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.614 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:20:53.614 02:40:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:53.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.554 ms 00:20:53.875 00:20:53.875 --- 10.0.0.1 ping statistics --- 00:20:53.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.875 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:20:53.875 02:40:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.875 02:40:27 -- nvmf/common.sh@411 -- # return 0 00:20:53.875 02:40:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:53.875 02:40:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.875 02:40:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:53.875 02:40:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:53.875 02:40:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.875 02:40:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:53.875 02:40:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:53.875 02:40:27 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:53.875 02:40:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:53.875 02:40:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:53.875 02:40:27 -- common/autotest_common.sh@10 -- # set +x 00:20:53.875 02:40:27 -- nvmf/common.sh@470 -- # nvmfpid=179195 00:20:53.875 02:40:27 -- nvmf/common.sh@471 -- # waitforlisten 179195 00:20:53.875 02:40:27 -- common/autotest_common.sh@817 -- # '[' -z 179195 ']' 00:20:53.875 02:40:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.875 02:40:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:53.875 02:40:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.875 02:40:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:53.875 02:40:27 -- common/autotest_common.sh@10 -- # set +x 00:20:53.875 02:40:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:53.875 [2024-04-27 02:40:27.338742] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:20:53.875 [2024-04-27 02:40:27.338806] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.875 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.875 [2024-04-27 02:40:27.410534] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.875 [2024-04-27 02:40:27.482548] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.875 [2024-04-27 02:40:27.482587] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.875 [2024-04-27 02:40:27.482596] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.875 [2024-04-27 02:40:27.482604] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.875 [2024-04-27 02:40:27.482611] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
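Note on the app_setup_trace notices above: the -e 0xFFFF passed to nvmf_tgt enables every tracepoint group, which is what those notices are reporting. If a failing run needs to be examined afterwards, the trace can be pulled from the shared-memory file named in the notice; an illustrative example using the instance id from this run:

# Illustrative only: snapshot or keep the tracepoint data that "-e 0xFFFF" enabled above.
build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt         # live snapshot, as the notice suggests
cp /dev/shm/nvmf_trace.0 .                                  # or preserve the raw shm file
build/bin/spdk_trace -f ./nvmf_trace.0 > nvmf_trace.txt     # parse the preserved copy offline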
00:20:53.875 [2024-04-27 02:40:27.482724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.875 [2024-04-27 02:40:27.482851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.875 [2024-04-27 02:40:27.483007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.875 [2024-04-27 02:40:27.483008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:54.819 02:40:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:54.819 02:40:28 -- common/autotest_common.sh@850 -- # return 0 00:20:54.819 02:40:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:54.819 02:40:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:54.819 02:40:28 -- common/autotest_common.sh@10 -- # set +x 00:20:54.819 02:40:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.819 02:40:28 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:54.819 02:40:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.819 02:40:28 -- common/autotest_common.sh@10 -- # set +x 00:20:54.819 [2024-04-27 02:40:28.145953] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.819 02:40:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.819 02:40:28 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:54.819 02:40:28 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:54.819 02:40:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:54.819 02:40:28 -- common/autotest_common.sh@10 -- # set +x 00:20:54.819 02:40:28 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:54.819 02:40:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.819 02:40:28 -- target/shutdown.sh@28 -- # cat 00:20:54.819 02:40:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.819 02:40:28 -- target/shutdown.sh@28 -- # cat 00:20:54.819 02:40:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.819 02:40:28 -- target/shutdown.sh@28 -- # cat 00:20:54.819 02:40:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.819 02:40:28 -- target/shutdown.sh@28 -- # cat 00:20:54.819 02:40:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.819 02:40:28 -- target/shutdown.sh@28 -- # cat 00:20:54.819 02:40:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.819 02:40:28 -- target/shutdown.sh@28 -- # cat 00:20:54.819 02:40:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.819 02:40:28 -- target/shutdown.sh@28 -- # cat 00:20:54.819 02:40:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.819 02:40:28 -- target/shutdown.sh@28 -- # cat 00:20:54.819 02:40:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.819 02:40:28 -- target/shutdown.sh@28 -- # cat 00:20:54.819 02:40:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.819 02:40:28 -- target/shutdown.sh@28 -- # cat 00:20:54.819 02:40:28 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:54.819 02:40:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.819 02:40:28 -- common/autotest_common.sh@10 -- # set +x 00:20:54.819 Malloc1 00:20:54.819 [2024-04-27 02:40:28.246604] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.819 Malloc2 
00:20:54.819 Malloc3 00:20:54.819 Malloc4 00:20:54.819 Malloc5 00:20:54.819 Malloc6 00:20:55.081 Malloc7 00:20:55.081 Malloc8 00:20:55.081 Malloc9 00:20:55.081 Malloc10 00:20:55.081 02:40:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.081 02:40:28 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:55.081 02:40:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:55.081 02:40:28 -- common/autotest_common.sh@10 -- # set +x 00:20:55.081 02:40:28 -- target/shutdown.sh@125 -- # perfpid=179584 00:20:55.081 02:40:28 -- target/shutdown.sh@126 -- # waitforlisten 179584 /var/tmp/bdevperf.sock 00:20:55.081 02:40:28 -- common/autotest_common.sh@817 -- # '[' -z 179584 ']' 00:20:55.081 02:40:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.081 02:40:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:55.081 02:40:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.081 02:40:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:55.081 02:40:28 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:55.081 02:40:28 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:55.081 02:40:28 -- common/autotest_common.sh@10 -- # set +x 00:20:55.081 02:40:28 -- nvmf/common.sh@521 -- # config=() 00:20:55.081 02:40:28 -- nvmf/common.sh@521 -- # local subsystem config 00:20:55.081 02:40:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:55.081 { 00:20:55.081 "params": { 00:20:55.081 "name": "Nvme$subsystem", 00:20:55.081 "trtype": "$TEST_TRANSPORT", 00:20:55.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.081 "adrfam": "ipv4", 00:20:55.081 "trsvcid": "$NVMF_PORT", 00:20:55.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.081 "hdgst": ${hdgst:-false}, 00:20:55.081 "ddgst": ${ddgst:-false} 00:20:55.081 }, 00:20:55.081 "method": "bdev_nvme_attach_controller" 00:20:55.081 } 00:20:55.081 EOF 00:20:55.081 )") 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # cat 00:20:55.081 02:40:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:55.081 { 00:20:55.081 "params": { 00:20:55.081 "name": "Nvme$subsystem", 00:20:55.081 "trtype": "$TEST_TRANSPORT", 00:20:55.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.081 "adrfam": "ipv4", 00:20:55.081 "trsvcid": "$NVMF_PORT", 00:20:55.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.081 "hdgst": ${hdgst:-false}, 00:20:55.081 "ddgst": ${ddgst:-false} 00:20:55.081 }, 00:20:55.081 "method": "bdev_nvme_attach_controller" 00:20:55.081 } 00:20:55.081 EOF 00:20:55.081 )") 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # cat 00:20:55.081 02:40:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:55.081 { 00:20:55.081 "params": { 00:20:55.081 "name": "Nvme$subsystem", 00:20:55.081 "trtype": "$TEST_TRANSPORT", 00:20:55.081 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:20:55.081 "adrfam": "ipv4", 00:20:55.081 "trsvcid": "$NVMF_PORT", 00:20:55.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.081 "hdgst": ${hdgst:-false}, 00:20:55.081 "ddgst": ${ddgst:-false} 00:20:55.081 }, 00:20:55.081 "method": "bdev_nvme_attach_controller" 00:20:55.081 } 00:20:55.081 EOF 00:20:55.081 )") 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # cat 00:20:55.081 02:40:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:55.081 { 00:20:55.081 "params": { 00:20:55.081 "name": "Nvme$subsystem", 00:20:55.081 "trtype": "$TEST_TRANSPORT", 00:20:55.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.081 "adrfam": "ipv4", 00:20:55.081 "trsvcid": "$NVMF_PORT", 00:20:55.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.081 "hdgst": ${hdgst:-false}, 00:20:55.081 "ddgst": ${ddgst:-false} 00:20:55.081 }, 00:20:55.081 "method": "bdev_nvme_attach_controller" 00:20:55.081 } 00:20:55.081 EOF 00:20:55.081 )") 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # cat 00:20:55.081 02:40:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:55.081 { 00:20:55.081 "params": { 00:20:55.081 "name": "Nvme$subsystem", 00:20:55.081 "trtype": "$TEST_TRANSPORT", 00:20:55.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.081 "adrfam": "ipv4", 00:20:55.081 "trsvcid": "$NVMF_PORT", 00:20:55.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.081 "hdgst": ${hdgst:-false}, 00:20:55.081 "ddgst": ${ddgst:-false} 00:20:55.081 }, 00:20:55.081 "method": "bdev_nvme_attach_controller" 00:20:55.081 } 00:20:55.081 EOF 00:20:55.081 )") 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # cat 00:20:55.081 02:40:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:55.081 { 00:20:55.081 "params": { 00:20:55.081 "name": "Nvme$subsystem", 00:20:55.081 "trtype": "$TEST_TRANSPORT", 00:20:55.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.081 "adrfam": "ipv4", 00:20:55.081 "trsvcid": "$NVMF_PORT", 00:20:55.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.081 "hdgst": ${hdgst:-false}, 00:20:55.081 "ddgst": ${ddgst:-false} 00:20:55.081 }, 00:20:55.081 "method": "bdev_nvme_attach_controller" 00:20:55.081 } 00:20:55.081 EOF 00:20:55.081 )") 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # cat 00:20:55.081 [2024-04-27 02:40:28.688181] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:20:55.081 [2024-04-27 02:40:28.688235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179584 ] 00:20:55.081 02:40:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:55.081 { 00:20:55.081 "params": { 00:20:55.081 "name": "Nvme$subsystem", 00:20:55.081 "trtype": "$TEST_TRANSPORT", 00:20:55.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.081 "adrfam": "ipv4", 00:20:55.081 "trsvcid": "$NVMF_PORT", 00:20:55.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.081 "hdgst": ${hdgst:-false}, 00:20:55.081 "ddgst": ${ddgst:-false} 00:20:55.081 }, 00:20:55.081 "method": "bdev_nvme_attach_controller" 00:20:55.081 } 00:20:55.081 EOF 00:20:55.081 )") 00:20:55.081 02:40:28 -- nvmf/common.sh@543 -- # cat 00:20:55.081 02:40:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:55.343 02:40:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:55.343 { 00:20:55.343 "params": { 00:20:55.343 "name": "Nvme$subsystem", 00:20:55.343 "trtype": "$TEST_TRANSPORT", 00:20:55.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.343 "adrfam": "ipv4", 00:20:55.343 "trsvcid": "$NVMF_PORT", 00:20:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.343 "hdgst": ${hdgst:-false}, 00:20:55.343 "ddgst": ${ddgst:-false} 00:20:55.343 }, 00:20:55.343 "method": "bdev_nvme_attach_controller" 00:20:55.343 } 00:20:55.343 EOF 00:20:55.343 )") 00:20:55.343 02:40:28 -- nvmf/common.sh@543 -- # cat 00:20:55.343 02:40:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:55.343 02:40:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:55.343 { 00:20:55.343 "params": { 00:20:55.343 "name": "Nvme$subsystem", 00:20:55.343 "trtype": "$TEST_TRANSPORT", 00:20:55.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.343 "adrfam": "ipv4", 00:20:55.343 "trsvcid": "$NVMF_PORT", 00:20:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.343 "hdgst": ${hdgst:-false}, 00:20:55.343 "ddgst": ${ddgst:-false} 00:20:55.343 }, 00:20:55.343 "method": "bdev_nvme_attach_controller" 00:20:55.343 } 00:20:55.343 EOF 00:20:55.343 )") 00:20:55.343 02:40:28 -- nvmf/common.sh@543 -- # cat 00:20:55.343 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.343 02:40:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:55.343 02:40:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:55.343 { 00:20:55.343 "params": { 00:20:55.343 "name": "Nvme$subsystem", 00:20:55.343 "trtype": "$TEST_TRANSPORT", 00:20:55.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.343 "adrfam": "ipv4", 00:20:55.343 "trsvcid": "$NVMF_PORT", 00:20:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.343 "hdgst": ${hdgst:-false}, 00:20:55.343 "ddgst": ${ddgst:-false} 00:20:55.343 }, 00:20:55.343 "method": "bdev_nvme_attach_controller" 00:20:55.343 } 00:20:55.343 EOF 00:20:55.343 )") 00:20:55.343 02:40:28 -- nvmf/common.sh@543 -- # cat 00:20:55.343 02:40:28 -- nvmf/common.sh@545 -- # jq . 
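Each pass through the loop traced above appends one bdev_nvme_attach_controller fragment to the config array; the jq ., IFS=, and printf '%s\n' steps around this point join the ten fragments with commas and pretty-print the JSON that bdevperf reads on /dev/fd/63 (the expanded document is printed next). A simplified sketch of that assembly follows; the wrapper object around the config array is an assumption for illustration, and the real gen_nvmf_target_json in nvmf/common.sh fills the fields from $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT rather than hard-coding them.

gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat << EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # With IFS set to a comma, "${config[*]}" expands to the fragments joined by commas.
  local IFS=,
  printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}" | jq .
}

Called as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10, this yields one attach request per cnode, matching the ten Nvme1 through Nvme10 entries printed below.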
00:20:55.343 02:40:28 -- nvmf/common.sh@546 -- # IFS=, 00:20:55.343 02:40:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:55.343 "params": { 00:20:55.343 "name": "Nvme1", 00:20:55.343 "trtype": "tcp", 00:20:55.343 "traddr": "10.0.0.2", 00:20:55.343 "adrfam": "ipv4", 00:20:55.343 "trsvcid": "4420", 00:20:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.343 "hdgst": false, 00:20:55.343 "ddgst": false 00:20:55.343 }, 00:20:55.343 "method": "bdev_nvme_attach_controller" 00:20:55.343 },{ 00:20:55.343 "params": { 00:20:55.343 "name": "Nvme2", 00:20:55.343 "trtype": "tcp", 00:20:55.343 "traddr": "10.0.0.2", 00:20:55.343 "adrfam": "ipv4", 00:20:55.343 "trsvcid": "4420", 00:20:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:55.343 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:55.343 "hdgst": false, 00:20:55.343 "ddgst": false 00:20:55.343 }, 00:20:55.343 "method": "bdev_nvme_attach_controller" 00:20:55.343 },{ 00:20:55.343 "params": { 00:20:55.343 "name": "Nvme3", 00:20:55.343 "trtype": "tcp", 00:20:55.343 "traddr": "10.0.0.2", 00:20:55.343 "adrfam": "ipv4", 00:20:55.343 "trsvcid": "4420", 00:20:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:55.343 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:55.343 "hdgst": false, 00:20:55.343 "ddgst": false 00:20:55.343 }, 00:20:55.343 "method": "bdev_nvme_attach_controller" 00:20:55.343 },{ 00:20:55.343 "params": { 00:20:55.343 "name": "Nvme4", 00:20:55.343 "trtype": "tcp", 00:20:55.343 "traddr": "10.0.0.2", 00:20:55.343 "adrfam": "ipv4", 00:20:55.343 "trsvcid": "4420", 00:20:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:55.343 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:55.343 "hdgst": false, 00:20:55.343 "ddgst": false 00:20:55.343 }, 00:20:55.343 "method": "bdev_nvme_attach_controller" 00:20:55.343 },{ 00:20:55.343 "params": { 00:20:55.343 "name": "Nvme5", 00:20:55.343 "trtype": "tcp", 00:20:55.343 "traddr": "10.0.0.2", 00:20:55.343 "adrfam": "ipv4", 00:20:55.343 "trsvcid": "4420", 00:20:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:55.343 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:55.343 "hdgst": false, 00:20:55.343 "ddgst": false 00:20:55.343 }, 00:20:55.343 "method": "bdev_nvme_attach_controller" 00:20:55.343 },{ 00:20:55.343 "params": { 00:20:55.343 "name": "Nvme6", 00:20:55.343 "trtype": "tcp", 00:20:55.343 "traddr": "10.0.0.2", 00:20:55.343 "adrfam": "ipv4", 00:20:55.343 "trsvcid": "4420", 00:20:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:55.343 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:55.343 "hdgst": false, 00:20:55.343 "ddgst": false 00:20:55.343 }, 00:20:55.343 "method": "bdev_nvme_attach_controller" 00:20:55.343 },{ 00:20:55.343 "params": { 00:20:55.343 "name": "Nvme7", 00:20:55.343 "trtype": "tcp", 00:20:55.343 "traddr": "10.0.0.2", 00:20:55.343 "adrfam": "ipv4", 00:20:55.343 "trsvcid": "4420", 00:20:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:55.343 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:55.343 "hdgst": false, 00:20:55.343 "ddgst": false 00:20:55.343 }, 00:20:55.343 "method": "bdev_nvme_attach_controller" 00:20:55.343 },{ 00:20:55.343 "params": { 00:20:55.343 "name": "Nvme8", 00:20:55.343 "trtype": "tcp", 00:20:55.343 "traddr": "10.0.0.2", 00:20:55.343 "adrfam": "ipv4", 00:20:55.343 "trsvcid": "4420", 00:20:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:55.343 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:55.343 "hdgst": false, 00:20:55.343 "ddgst": false 00:20:55.343 }, 00:20:55.343 "method": 
"bdev_nvme_attach_controller" 00:20:55.343 },{ 00:20:55.344 "params": { 00:20:55.344 "name": "Nvme9", 00:20:55.344 "trtype": "tcp", 00:20:55.344 "traddr": "10.0.0.2", 00:20:55.344 "adrfam": "ipv4", 00:20:55.344 "trsvcid": "4420", 00:20:55.344 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:55.344 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:55.344 "hdgst": false, 00:20:55.344 "ddgst": false 00:20:55.344 }, 00:20:55.344 "method": "bdev_nvme_attach_controller" 00:20:55.344 },{ 00:20:55.344 "params": { 00:20:55.344 "name": "Nvme10", 00:20:55.344 "trtype": "tcp", 00:20:55.344 "traddr": "10.0.0.2", 00:20:55.344 "adrfam": "ipv4", 00:20:55.344 "trsvcid": "4420", 00:20:55.344 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:55.344 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:55.344 "hdgst": false, 00:20:55.344 "ddgst": false 00:20:55.344 }, 00:20:55.344 "method": "bdev_nvme_attach_controller" 00:20:55.344 }' 00:20:55.344 [2024-04-27 02:40:28.748409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.344 [2024-04-27 02:40:28.811299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.729 Running I/O for 10 seconds... 00:20:56.729 02:40:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:56.729 02:40:30 -- common/autotest_common.sh@850 -- # return 0 00:20:56.729 02:40:30 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:56.729 02:40:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.729 02:40:30 -- common/autotest_common.sh@10 -- # set +x 00:20:56.990 02:40:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.990 02:40:30 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:56.990 02:40:30 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:56.990 02:40:30 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:56.990 02:40:30 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:56.990 02:40:30 -- target/shutdown.sh@57 -- # local ret=1 00:20:56.990 02:40:30 -- target/shutdown.sh@58 -- # local i 00:20:56.990 02:40:30 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:56.990 02:40:30 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:56.990 02:40:30 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:56.990 02:40:30 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:56.990 02:40:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.990 02:40:30 -- common/autotest_common.sh@10 -- # set +x 00:20:56.990 02:40:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.990 02:40:30 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:56.990 02:40:30 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:56.990 02:40:30 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:57.251 02:40:30 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:57.251 02:40:30 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:57.251 02:40:30 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:57.251 02:40:30 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:57.251 02:40:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.251 02:40:30 -- common/autotest_common.sh@10 -- # set +x 00:20:57.251 02:40:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.251 02:40:30 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:57.251 02:40:30 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:57.251 02:40:30 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:57.512 02:40:31 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:57.512 02:40:31 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:57.512 02:40:31 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:57.512 02:40:31 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:57.512 02:40:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.512 02:40:31 -- common/autotest_common.sh@10 -- # set +x 00:20:57.789 02:40:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.789 02:40:31 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:57.789 02:40:31 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:57.789 02:40:31 -- target/shutdown.sh@64 -- # ret=0 00:20:57.789 02:40:31 -- target/shutdown.sh@65 -- # break 00:20:57.789 02:40:31 -- target/shutdown.sh@69 -- # return 0 00:20:57.789 02:40:31 -- target/shutdown.sh@135 -- # killprocess 179195 00:20:57.789 02:40:31 -- common/autotest_common.sh@936 -- # '[' -z 179195 ']' 00:20:57.789 02:40:31 -- common/autotest_common.sh@940 -- # kill -0 179195 00:20:57.789 02:40:31 -- common/autotest_common.sh@941 -- # uname 00:20:57.789 02:40:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:57.789 02:40:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 179195 00:20:57.789 02:40:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:57.789 02:40:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:57.789 02:40:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 179195' 00:20:57.789 killing process with pid 179195 00:20:57.789 02:40:31 -- common/autotest_common.sh@955 -- # kill 179195 00:20:57.789 02:40:31 -- common/autotest_common.sh@960 -- # wait 179195 00:20:57.789 [2024-04-27 02:40:31.220817] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c3ef0 is same with the state(5) to be set 00:20:57.789 [2024-04-27 02:40:31.221582] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c6820 is same with the state(5) to be set 00:20:57.789 [2024-04-27 02:40:31.221613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c6820 is same with the state(5) to be set 00:20:57.789 [2024-04-27 02:40:31.221621] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c6820 is same with the state(5) to be set 00:20:57.789 [2024-04-27 02:40:31.221628] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c6820 is same with the state(5) to be set 00:20:57.789 [2024-04-27 02:40:31.221635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c6820 is same with the state(5) to be set 00:20:57.789 [2024-04-27 02:40:31.221648] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c6820 is same with the state(5) to be set 00:20:57.789 [2024-04-27 02:40:31.221655] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c6820 is same with the state(5) to be set 00:20:57.789 [2024-04-27 02:40:31.221662] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c6820 is same with the state(5) to be set 00:20:57.789 [2024-04-27 02:40:31.221668] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c6820 is same with the state(5) to be set 00:20:57.789 [2024-04-27 
02:40:31.221675] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c6820 is same with the state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state *ERROR* line repeats for tqpair=0x22c6820 through 02:40:31.222018 ...]
00:20:57.790 [2024-04-27 02:40:31.225002] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4ca0 is same with the state(5) to be set
[... repeats for tqpair=0x22c4ca0 through 02:40:31.225419 ...]
00:20:57.790 [2024-04-27 02:40:31.226502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c55e0 is same with the state(5) to be set
[... repeats for tqpair=0x22c55e0 through 02:40:31.226925 ...]
00:20:57.791 [2024-04-27 02:40:31.227799] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c5a70 is same with the state(5) to be set
[... repeats for tqpair=0x22c5a70 through 02:40:31.228105 ...]
00:20:57.792 [2024-04-27 02:40:31.229185] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c6390 is same with the state(5) to be set
[... repeats for tqpair=0x22c6390 through 02:40:31.229497 ...]
00:20:57.793 [2024-04-27 02:40:31.229540] nvme_qpair.c: 223:nvme_admin_qpair_print_command:
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbadd0 is same with the state(5) to be set 00:20:57.793 [2024-04-27 02:40:31.229726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbd880 is same with the state(5) to be set 00:20:57.793 [2024-04-27 02:40:31.229823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:57.793 [2024-04-27 02:40:31.229848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fdf0 is same with the state(5) to be set 00:20:57.793 [2024-04-27 02:40:31.229908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.229969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b9eb0 is same with the state(5) to be set 00:20:57.793 [2024-04-27 02:40:31.229990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.229999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.230007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.230014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.230022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.230029] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.230037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.230046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.230053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f125f0 is same with the state(5) to be set 00:20:57.793 [2024-04-27 02:40:31.230076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.230084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.230092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.230100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.230107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.230114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.230122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.230129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.230136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aba370 is same with the state(5) to be set 00:20:57.793 [2024-04-27 02:40:31.230156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.230164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.230172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.793 [2024-04-27 02:40:31.230179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.793 [2024-04-27 02:40:31.230187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.230202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:57.794 [2024-04-27 02:40:31.230215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb62c0 is same with the state(5) to be set 00:20:57.794 [2024-04-27 02:40:31.230240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.230258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.230273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.230299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.230313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1ad10 is same with the state(5) to be set 00:20:57.794 [2024-04-27 02:40:31.230334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.230350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.230365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.230379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.230394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1eb10 is same with the state(5) to be set 00:20:57.794 [2024-04-27 02:40:31.230415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230423] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.230430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.230445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.230460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.794 [2024-04-27 02:40:31.230467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.230474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b8ec0 is same with the state(5) to be set 00:20:57.794 [2024-04-27 02:40:31.231320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.794 [2024-04-27 02:40:31.231739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.794 [2024-04-27 02:40:31.231746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.231762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.231779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.231797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.231814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.231830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.231847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.231865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.231881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.231897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.231914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.231931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:57.795 [2024-04-27 02:40:31.231947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.231966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.231982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.231991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 
02:40:31.232115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232287] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.795 [2024-04-27 02:40:31.232401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.795 [2024-04-27 02:40:31.232452] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x203e300 was disconnected and freed. reset controller. 
00:20:57.796 [2024-04-27 02:40:31.232483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 
02:40:31.232658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232821] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.232970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.796 [2024-04-27 02:40:31.232977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.796 [2024-04-27 02:40:31.244993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:57.796 [2024-04-27 02:40:31.245031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:28-59 (lba:19968-23936, len:128) ...]
00:20:57.797 [2024-04-27 02:40:31.245608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:57.797 [2024-04-27 02:40:31.245616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:57.797 [2024-04-27 02:40:31.245691] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f9bd90 was disconnected and freed. reset controller.
00:20:57.797 [2024-04-27 02:40:31.245739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:57.797 [2024-04-27 02:40:31.245750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:1-62 (lba:16512-24320, len:128) ...]
00:20:57.799 [2024-04-27 02:40:31.246841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:57.799 [2024-04-27 02:40:31.246849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:57.799 [2024-04-27 02:40:31.246912] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f9d1a0 was disconnected and freed. reset controller.
00:20:57.799 [2024-04-27 02:40:31.246992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:57.799 [2024-04-27 02:40:31.247001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:28-63 (lba:28160-32640, len:128), followed by READ pairs for cid:0-25 (lba:24576-27776, len:128) ...]
00:20:57.800 [2024-04-27 02:40:31.253496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:57.800 [2024-04-27 02:40:31.253504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:57.800 [2024-04-27 02:40:31.253574] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f9e600 was disconnected and freed. reset controller.
00:20:57.801 [2024-04-27 02:40:31.253786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:57.801 [2024-04-27 02:40:31.253799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:13-56 (lba:26240-31744, len:128) ...]
00:20:57.802 [2024-04-27 02:40:31.254574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:57.802 [2024-04-27 02:40:31.254581] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.254888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.254941] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1eea1e0 was disconnected and freed. reset controller. 
00:20:57.802 [2024-04-27 02:40:31.255110] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbadd0 (9): Bad file descriptor 00:20:57.802 [2024-04-27 02:40:31.255131] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbd880 (9): Bad file descriptor 00:20:57.802 [2024-04-27 02:40:31.255144] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205fdf0 (9): Bad file descriptor 00:20:57.802 [2024-04-27 02:40:31.255158] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b9eb0 (9): Bad file descriptor 00:20:57.802 [2024-04-27 02:40:31.255172] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f125f0 (9): Bad file descriptor 00:20:57.802 [2024-04-27 02:40:31.255185] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aba370 (9): Bad file descriptor 00:20:57.802 [2024-04-27 02:40:31.255200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb62c0 (9): Bad file descriptor 00:20:57.802 [2024-04-27 02:40:31.255213] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1ad10 (9): Bad file descriptor 00:20:57.802 [2024-04-27 02:40:31.255226] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1eb10 (9): Bad file descriptor 00:20:57.802 [2024-04-27 02:40:31.255238] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b8ec0 (9): Bad file descriptor 00:20:57.802 [2024-04-27 02:40:31.255266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.255282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.255298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.255306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.255315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.255323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.255332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.802 [2024-04-27 02:40:31.255339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.802 [2024-04-27 02:40:31.255349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:57.803 [2024-04-27 02:40:31.255373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 
[2024-04-27 02:40:31.255541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 
02:40:31.255709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 
02:40:31.255879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.255987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.255994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.803 [2024-04-27 02:40:31.256003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.803 [2024-04-27 02:40:31.256010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 
02:40:31.256045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 
02:40:31.256215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.256360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.256413] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x203d070 was disconnected and freed. reset controller. 
00:20:57.804 [2024-04-27 02:40:31.263891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:57.804 [2024-04-27 02:40:31.263921] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:57.804 [2024-04-27 02:40:31.263931] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:57.804 [2024-04-27 02:40:31.265040] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:57.804 [2024-04-27 02:40:31.265088] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:57.804 [2024-04-27 02:40:31.265425] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:57.804 [2024-04-27 02:40:31.265442] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:57.804 [2024-04-27 02:40:31.265452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:57.804 [2024-04-27 02:40:31.265462] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.804 [2024-04-27 02:40:31.265905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.804 [2024-04-27 02:40:31.266141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.804 [2024-04-27 02:40:31.266152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1eb10 with addr=10.0.0.2, port=4420 00:20:57.804 [2024-04-27 02:40:31.266160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1eb10 is same with the state(5) to be set 00:20:57.804 [2024-04-27 02:40:31.266593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.804 [2024-04-27 02:40:31.267142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.804 [2024-04-27 02:40:31.267155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1ad10 with addr=10.0.0.2, port=4420 00:20:57.804 [2024-04-27 02:40:31.267166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1ad10 is same with the state(5) to be set 00:20:57.804 [2024-04-27 02:40:31.267526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.804 [2024-04-27 02:40:31.268031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.804 [2024-04-27 02:40:31.268045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b9eb0 with addr=10.0.0.2, port=4420 00:20:57.804 [2024-04-27 02:40:31.268055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b9eb0 is same with the state(5) to be set 00:20:57.804 [2024-04-27 02:40:31.268482] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:57.804 [2024-04-27 02:40:31.268781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.804 [2024-04-27 02:40:31.269262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.804 [2024-04-27 02:40:31.269272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f125f0 with addr=10.0.0.2, port=4420 00:20:57.804 [2024-04-27 02:40:31.269290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f125f0 is same with the 
state(5) to be set 00:20:57.804 [2024-04-27 02:40:31.269780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.804 [2024-04-27 02:40:31.270122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.804 [2024-04-27 02:40:31.270133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205fdf0 with addr=10.0.0.2, port=4420 00:20:57.804 [2024-04-27 02:40:31.270140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205fdf0 is same with the state(5) to be set 00:20:57.804 [2024-04-27 02:40:31.270616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.804 [2024-04-27 02:40:31.271170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.804 [2024-04-27 02:40:31.271184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aba370 with addr=10.0.0.2, port=4420 00:20:57.804 [2024-04-27 02:40:31.271193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aba370 is same with the state(5) to be set 00:20:57.804 [2024-04-27 02:40:31.271209] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1eb10 (9): Bad file descriptor 00:20:57.804 [2024-04-27 02:40:31.271220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1ad10 (9): Bad file descriptor 00:20:57.804 [2024-04-27 02:40:31.271229] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b9eb0 (9): Bad file descriptor 00:20:57.804 [2024-04-27 02:40:31.271333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.271348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.271364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.271373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.271383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.271391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.271400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.804 [2024-04-27 02:40:31.271407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.804 [2024-04-27 02:40:31.271417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 
[2024-04-27 02:40:31.271619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 02:40:31.271777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.805 [2024-04-27 02:40:31.271786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.805 [2024-04-27 
02:40:31.271794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 38 repeated NOTICE pairs, 2024-04-27 02:40:31.271803 - 02:40:31.272452: nvme_qpair.c: 243:nvme_io_qpair_print_command: READ sqid:1 cid:26-63 nsid:1 lba:19712-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:57.806 [2024-04-27 02:40:31.272461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee7880 is same with the state(5) to be set
[... 64 repeated NOTICE pairs, 2024-04-27 02:40:31.273756 - 02:40:31.274886: nvme_qpair.c: 243:nvme_io_qpair_print_command: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:57.808 [2024-04-27 02:40:31.274894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee8d30 is same with the state(5) to be set
[... 64 repeated NOTICE pairs, 2024-04-27 02:40:31.276165 - 02:40:31.277291: nvme_qpair.c: 243:nvme_io_qpair_print_command: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:57.809 [2024-04-27 02:40:31.277299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9ee80 is same with the state(5) to be set
[... 42 repeated NOTICE pairs, 2024-04-27 02:40:31.279221 - 02:40:31.279968: nvme_qpair.c: 243:nvme_io_qpair_print_command: READ sqid:1 cid:0-41 nsid:1 lba:16384-21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:57.810 [2024-04-27 02:40:31.279978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:57.810 [2024-04-27 02:40:31.279985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:20:57.810 [2024-04-27 02:40:31.279994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.810 [2024-04-27 02:40:31.280002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:57.811 [2024-04-27 02:40:31.280165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 
02:40:31.280336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.811 [2024-04-27 02:40:31.280342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.811 [2024-04-27 02:40:31.280351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20351c0 is same with the state(5) to be set 00:20:57.811 [2024-04-27 02:40:31.282143] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:57.811 [2024-04-27 02:40:31.282165] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:57.811 [2024-04-27 02:40:31.282176] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:57.811 task offset: 28928 on job bdev=Nvme2n1 fails 00:20:57.811 00:20:57.811 Latency(us) 00:20:57.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.811 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.811 Job: Nvme1n1 ended in about 0.95 seconds with error 00:20:57.811 Verification LBA range: start 0x0 length 0x400 00:20:57.812 Nvme1n1 : 0.95 203.04 12.69 67.68 0.00 233759.15 25886.72 244667.73 00:20:57.812 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.812 Job: Nvme2n1 ended in about 0.94 seconds with error 00:20:57.812 Verification LBA range: start 0x0 length 0x400 00:20:57.812 Nvme2n1 : 0.94 204.35 12.77 68.12 0.00 227377.07 28180.48 227191.47 00:20:57.812 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.812 Job: Nvme3n1 ended in about 0.94 seconds with error 00:20:57.812 Verification LBA range: start 0x0 length 0x400 00:20:57.812 Nvme3n1 : 0.94 136.06 8.50 68.03 0.00 297214.58 23483.73 286610.77 00:20:57.812 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.812 Job: Nvme4n1 ended in about 0.94 seconds with error 00:20:57.812 Verification LBA range: start 0x0 length 0x400 00:20:57.812 Nvme4n1 : 0.94 135.89 8.49 67.94 0.00 291130.03 23811.41 281367.89 00:20:57.812 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.812 Job: Nvme5n1 ended in about 0.94 seconds with error 00:20:57.812 Verification LBA range: start 0x0 length 0x400 00:20:57.812 Nvme5n1 : 0.94 203.58 12.72 67.86 0.00 213726.93 23046.83 242920.11 00:20:57.812 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.812 Job: Nvme6n1 ended in about 0.96 seconds with error 00:20:57.812 Verification LBA range: start 0x0 length 0x400 00:20:57.812 Nvme6n1 : 0.96 133.95 8.37 66.97 0.00 282779.31 24029.87 281367.89 00:20:57.812 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.812 Job: Nvme7n1 ended in about 0.96 seconds with error 00:20:57.812 Verification LBA range: start 0x0 length 0x400 00:20:57.812 Nvme7n1 : 0.96 200.42 12.53 66.81 0.00 207799.04 21408.43 241172.48 00:20:57.812 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.812 Job: Nvme8n1 ended in about 0.94 seconds with error 00:20:57.812 Verification LBA range: start 0x0 length 0x400 00:20:57.812 Nvme8n1 : 0.94 203.31 12.71 67.77 0.00 199622.61 23265.28 242920.11 00:20:57.812 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.812 Job: 
Nvme9n1 ended in about 0.96 seconds with error 00:20:57.812 Verification LBA range: start 0x0 length 0x400 00:20:57.812 Nvme9n1 : 0.96 133.28 8.33 66.64 0.00 265231.93 24794.45 230686.72 00:20:57.812 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.812 Job: Nvme10n1 ended in about 0.96 seconds with error 00:20:57.812 Verification LBA range: start 0x0 length 0x400 00:20:57.812 Nvme10n1 : 0.96 132.86 8.30 66.43 0.00 259939.84 22282.24 260396.37 00:20:57.812 =================================================================================================================== 00:20:57.812 Total : 1686.73 105.42 674.25 0.00 243372.18 21408.43 286610.77 00:20:57.812 [2024-04-27 02:40:31.309377] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:57.812 [2024-04-27 02:40:31.309421] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:57.812 [2024-04-27 02:40:31.309484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f125f0 (9): Bad file descriptor 00:20:57.812 [2024-04-27 02:40:31.309498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205fdf0 (9): Bad file descriptor 00:20:57.812 [2024-04-27 02:40:31.309508] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aba370 (9): Bad file descriptor 00:20:57.812 [2024-04-27 02:40:31.309516] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:57.812 [2024-04-27 02:40:31.309523] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:57.812 [2024-04-27 02:40:31.309531] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:57.812 [2024-04-27 02:40:31.309552] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:57.812 [2024-04-27 02:40:31.309559] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:57.812 [2024-04-27 02:40:31.309566] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:57.812 [2024-04-27 02:40:31.309577] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:57.812 [2024-04-27 02:40:31.309583] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:57.812 [2024-04-27 02:40:31.309590] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:57.812 [2024-04-27 02:40:31.309622] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:57.812 [2024-04-27 02:40:31.309639] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:57.812 [2024-04-27 02:40:31.309652] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:57.812 [2024-04-27 02:40:31.309668] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:57.812 [2024-04-27 02:40:31.309680] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
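(Editor's note, not part of the captured log.) A quick cross-check of the Latency summary a few lines up: each job ran 65536-byte (64 KiB) I/Os, so the MiB/s column should equal IOPS x IO size / 2^20. A throwaway awk one-liner, used here only to sanity-check the Nvme1n1 row, confirms the units line up:
awk 'BEGIN { iops=203.04; iosize=65536; printf "%.2f MiB/s\n", iops*iosize/(1024*1024) }'
# prints 12.69 MiB/s, matching the Nvme1n1 row. The Fail/s column (67.68 for Nvme1n1) is
# presumably the rate of I/Os completing with error, which is expected in this test since
# the target is torn down while bdevperf is still issuing verify I/O.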
00:20:57.812 [2024-04-27 02:40:31.309690] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:57.812 [2024-04-27 02:40:31.309789] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.812 [2024-04-27 02:40:31.309799] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.812 [2024-04-27 02:40:31.309805] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.812 [2024-04-27 02:40:31.310194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.812 [2024-04-27 02:40:31.310561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.812 [2024-04-27 02:40:31.310572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b8ec0 with addr=10.0.0.2, port=4420 00:20:57.812 [2024-04-27 02:40:31.310582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b8ec0 is same with the state(5) to be set 00:20:57.812 [2024-04-27 02:40:31.310907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.812 [2024-04-27 02:40:31.311014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.812 [2024-04-27 02:40:31.311023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbadd0 with addr=10.0.0.2, port=4420 00:20:57.812 [2024-04-27 02:40:31.311031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbadd0 is same with the state(5) to be set 00:20:57.812 [2024-04-27 02:40:31.311450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.812 [2024-04-27 02:40:31.311915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.812 [2024-04-27 02:40:31.311926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbd880 with addr=10.0.0.2, port=4420 00:20:57.812 [2024-04-27 02:40:31.311934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbd880 is same with the state(5) to be set 00:20:57.812 [2024-04-27 02:40:31.312297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.812 [2024-04-27 02:40:31.312590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.812 [2024-04-27 02:40:31.312600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb62c0 with addr=10.0.0.2, port=4420 00:20:57.812 [2024-04-27 02:40:31.312607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb62c0 is same with the state(5) to be set 00:20:57.812 [2024-04-27 02:40:31.312618] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:57.812 [2024-04-27 02:40:31.312624] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:57.812 [2024-04-27 02:40:31.312631] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:20:57.812 [2024-04-27 02:40:31.312641] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:57.812 [2024-04-27 02:40:31.312648] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:57.812 [2024-04-27 02:40:31.312654] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:57.812 [2024-04-27 02:40:31.312664] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:57.812 [2024-04-27 02:40:31.312670] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:57.812 [2024-04-27 02:40:31.312677] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.812 [2024-04-27 02:40:31.312715] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:57.812 [2024-04-27 02:40:31.312727] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:57.812 [2024-04-27 02:40:31.312737] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:57.812 [2024-04-27 02:40:31.313806] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.812 [2024-04-27 02:40:31.313819] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.812 [2024-04-27 02:40:31.313825] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.812 [2024-04-27 02:40:31.313845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b8ec0 (9): Bad file descriptor 00:20:57.813 [2024-04-27 02:40:31.313855] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbadd0 (9): Bad file descriptor 00:20:57.813 [2024-04-27 02:40:31.313864] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbd880 (9): Bad file descriptor 00:20:57.813 [2024-04-27 02:40:31.313873] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb62c0 (9): Bad file descriptor 00:20:57.813 [2024-04-27 02:40:31.313938] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:57.813 [2024-04-27 02:40:31.313951] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:57.813 [2024-04-27 02:40:31.313959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:57.813 [2024-04-27 02:40:31.313984] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:57.813 [2024-04-27 02:40:31.313991] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:57.813 [2024-04-27 02:40:31.313998] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:20:57.813 [2024-04-27 02:40:31.314007] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:57.813 [2024-04-27 02:40:31.314013] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:57.813 [2024-04-27 02:40:31.314020] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:57.813 [2024-04-27 02:40:31.314029] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:57.813 [2024-04-27 02:40:31.314036] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:57.813 [2024-04-27 02:40:31.314045] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:57.813 [2024-04-27 02:40:31.314055] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:57.813 [2024-04-27 02:40:31.314061] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:57.813 [2024-04-27 02:40:31.314068] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:57.813 [2024-04-27 02:40:31.314355] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.813 [2024-04-27 02:40:31.314368] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.813 [2024-04-27 02:40:31.314374] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.813 [2024-04-27 02:40:31.314381] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:57.813 [2024-04-27 02:40:31.314649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.813 [2024-04-27 02:40:31.315161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.813 [2024-04-27 02:40:31.315173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b9eb0 with addr=10.0.0.2, port=4420 00:20:57.813 [2024-04-27 02:40:31.315181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b9eb0 is same with the state(5) to be set 00:20:57.813 [2024-04-27 02:40:31.315662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.813 [2024-04-27 02:40:31.316130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.813 [2024-04-27 02:40:31.316141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1ad10 with addr=10.0.0.2, port=4420 00:20:57.813 [2024-04-27 02:40:31.316148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1ad10 is same with the state(5) to be set 00:20:57.813 [2024-04-27 02:40:31.316627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.813 [2024-04-27 02:40:31.316999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.813 [2024-04-27 02:40:31.317009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1eb10 with addr=10.0.0.2, port=4420 00:20:57.813 [2024-04-27 02:40:31.317016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1eb10 is same with the state(5) to be set 00:20:57.813 [2024-04-27 02:40:31.317054] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b9eb0 (9): Bad file descriptor 00:20:57.813 [2024-04-27 02:40:31.317064] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1ad10 (9): Bad file descriptor 00:20:57.813 [2024-04-27 02:40:31.317073] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1eb10 (9): Bad file descriptor 00:20:57.813 [2024-04-27 02:40:31.317101] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:57.813 [2024-04-27 02:40:31.317109] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:57.813 [2024-04-27 02:40:31.317117] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:57.813 [2024-04-27 02:40:31.317128] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:57.813 [2024-04-27 02:40:31.317134] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:57.813 [2024-04-27 02:40:31.317141] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:57.813 [2024-04-27 02:40:31.317150] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:57.813 [2024-04-27 02:40:31.317156] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:57.813 [2024-04-27 02:40:31.317166] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
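(Editor's note, not part of the captured log.) The repeated "posix_sock_create: *ERROR*: connect() failed, errno = 111" entries above are ECONNREFUSED: by this point the target side has torn its listeners down, so every host-side reconnect attempt to 10.0.0.2 port 4420 is refused. On a typical Linux build host the constant can be confirmed from the UAPI headers, assuming kernel headers are installed at the usual location:
grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
# expected: #define ECONNREFUSED 111 /* Connection refused */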
00:20:57.813 [2024-04-27 02:40:31.317197] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.813 [2024-04-27 02:40:31.317204] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:57.813 [2024-04-27 02:40:31.317210] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.074 02:40:31 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:58.074 02:40:31 -- target/shutdown.sh@139 -- # sleep 1 00:20:59.017 02:40:32 -- target/shutdown.sh@142 -- # kill -9 179584 00:20:59.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (179584) - No such process 00:20:59.017 02:40:32 -- target/shutdown.sh@142 -- # true 00:20:59.017 02:40:32 -- target/shutdown.sh@144 -- # stoptarget 00:20:59.017 02:40:32 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:59.017 02:40:32 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:59.017 02:40:32 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:59.017 02:40:32 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:59.017 02:40:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:59.017 02:40:32 -- nvmf/common.sh@117 -- # sync 00:20:59.017 02:40:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:59.017 02:40:32 -- nvmf/common.sh@120 -- # set +e 00:20:59.017 02:40:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:59.017 02:40:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:59.017 rmmod nvme_tcp 00:20:59.017 rmmod nvme_fabrics 00:20:59.017 rmmod nvme_keyring 00:20:59.017 02:40:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:59.017 02:40:32 -- nvmf/common.sh@124 -- # set -e 00:20:59.017 02:40:32 -- nvmf/common.sh@125 -- # return 0 00:20:59.017 02:40:32 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:20:59.017 02:40:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:59.017 02:40:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:59.017 02:40:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:59.017 02:40:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:59.017 02:40:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:59.017 02:40:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.017 02:40:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.017 02:40:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.564 02:40:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:01.564 00:21:01.564 real 0m7.760s 00:21:01.564 user 0m18.832s 00:21:01.564 sys 0m1.249s 00:21:01.564 02:40:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:01.564 02:40:34 -- common/autotest_common.sh@10 -- # set +x 00:21:01.564 ************************************ 00:21:01.564 END TEST nvmf_shutdown_tc3 00:21:01.564 ************************************ 00:21:01.564 02:40:34 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:01.564 00:21:01.564 real 0m32.568s 00:21:01.564 user 1m16.093s 00:21:01.564 sys 0m9.347s 00:21:01.564 02:40:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:01.564 02:40:34 -- common/autotest_common.sh@10 -- # set +x 00:21:01.564 ************************************ 00:21:01.564 END TEST nvmf_shutdown 00:21:01.564 ************************************ 00:21:01.564 02:40:34 -- nvmf/nvmf.sh@84 -- 
# timing_exit target 00:21:01.564 02:40:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:01.564 02:40:34 -- common/autotest_common.sh@10 -- # set +x 00:21:01.564 02:40:34 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:21:01.564 02:40:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:01.564 02:40:34 -- common/autotest_common.sh@10 -- # set +x 00:21:01.564 02:40:34 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:21:01.564 02:40:34 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:01.564 02:40:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:01.564 02:40:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:01.564 02:40:34 -- common/autotest_common.sh@10 -- # set +x 00:21:01.564 ************************************ 00:21:01.564 START TEST nvmf_multicontroller 00:21:01.564 ************************************ 00:21:01.564 02:40:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:01.564 * Looking for test storage... 00:21:01.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:01.564 02:40:35 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.564 02:40:35 -- nvmf/common.sh@7 -- # uname -s 00:21:01.564 02:40:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.564 02:40:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.564 02:40:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.564 02:40:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.564 02:40:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.564 02:40:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.564 02:40:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.564 02:40:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.564 02:40:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.564 02:40:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.564 02:40:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.564 02:40:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.564 02:40:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.564 02:40:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.564 02:40:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.564 02:40:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.564 02:40:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.564 02:40:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.564 02:40:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.564 02:40:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.564 02:40:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.564 02:40:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.564 02:40:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.564 02:40:35 -- paths/export.sh@5 -- # export PATH 00:21:01.564 02:40:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.564 02:40:35 -- nvmf/common.sh@47 -- # : 0 00:21:01.564 02:40:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:01.564 02:40:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:01.565 02:40:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.565 02:40:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.565 02:40:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.565 02:40:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:01.565 02:40:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:01.565 02:40:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:01.565 02:40:35 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:01.565 02:40:35 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:01.565 02:40:35 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:01.565 02:40:35 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:01.565 02:40:35 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.565 02:40:35 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:01.565 02:40:35 -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:01.565 02:40:35 -- nvmf/common.sh@430 -- # '[' -z tcp 
']' 00:21:01.565 02:40:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.565 02:40:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:01.565 02:40:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:01.565 02:40:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:01.565 02:40:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.565 02:40:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.565 02:40:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.565 02:40:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:01.565 02:40:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:01.565 02:40:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:01.565 02:40:35 -- common/autotest_common.sh@10 -- # set +x 00:21:08.160 02:40:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:08.160 02:40:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:08.160 02:40:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:08.160 02:40:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:08.160 02:40:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:08.160 02:40:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:08.160 02:40:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:08.160 02:40:41 -- nvmf/common.sh@295 -- # net_devs=() 00:21:08.160 02:40:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:08.160 02:40:41 -- nvmf/common.sh@296 -- # e810=() 00:21:08.160 02:40:41 -- nvmf/common.sh@296 -- # local -ga e810 00:21:08.160 02:40:41 -- nvmf/common.sh@297 -- # x722=() 00:21:08.160 02:40:41 -- nvmf/common.sh@297 -- # local -ga x722 00:21:08.160 02:40:41 -- nvmf/common.sh@298 -- # mlx=() 00:21:08.160 02:40:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:08.160 02:40:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.160 02:40:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.160 02:40:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.160 02:40:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.160 02:40:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.160 02:40:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.160 02:40:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.160 02:40:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.160 02:40:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.160 02:40:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.160 02:40:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.160 02:40:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:08.160 02:40:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:08.160 02:40:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:08.160 02:40:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.160 02:40:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:08.160 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:08.160 02:40:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.160 02:40:41 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.160 02:40:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:08.160 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:08.160 02:40:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:08.160 02:40:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.160 02:40:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.160 02:40:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:08.160 02:40:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.160 02:40:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:08.160 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:08.160 02:40:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.160 02:40:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.160 02:40:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.160 02:40:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:08.160 02:40:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.160 02:40:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:08.160 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:08.160 02:40:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.160 02:40:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:08.160 02:40:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:08.160 02:40:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:08.160 02:40:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:08.160 02:40:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.160 02:40:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.160 02:40:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.160 02:40:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:08.160 02:40:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.160 02:40:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.160 02:40:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:08.160 02:40:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.160 02:40:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.160 02:40:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:08.160 02:40:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:08.160 02:40:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.161 02:40:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
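(Editor's note, not part of the captured log.) The two "Found 0000:4b:00.x (0x8086 - 0x159b)" lines come from the helper walking the PCI bus for Intel E810 ports (device id 0x159b) and then reading the attached net device names from sysfs, which is where the "Found net devices under 0000:4b:00.x: cvl_0_x" lines originate. The same information can be pulled by hand, assuming pciutils is available; the bus addresses below are the ones from this run:
lspci -d 8086:159b
# netdev name(s) bound to each PCI function, as read from sysfs:
ls /sys/bus/pci/devices/0000:4b:00.0/net /sys/bus/pci/devices/0000:4b:00.1/net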
00:21:08.161 02:40:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.161 02:40:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.161 02:40:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:08.161 02:40:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.161 02:40:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.161 02:40:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.161 02:40:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:08.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:21:08.161 00:21:08.161 --- 10.0.0.2 ping statistics --- 00:21:08.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.161 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:21:08.161 02:40:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.435 ms 00:21:08.161 00:21:08.161 --- 10.0.0.1 ping statistics --- 00:21:08.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.161 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:21:08.161 02:40:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.161 02:40:41 -- nvmf/common.sh@411 -- # return 0 00:21:08.161 02:40:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:08.161 02:40:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.161 02:40:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:08.161 02:40:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:08.161 02:40:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.161 02:40:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:08.161 02:40:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:08.161 02:40:41 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:08.161 02:40:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:08.161 02:40:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:08.161 02:40:41 -- common/autotest_common.sh@10 -- # set +x 00:21:08.161 02:40:41 -- nvmf/common.sh@470 -- # nvmfpid=184389 00:21:08.161 02:40:41 -- nvmf/common.sh@471 -- # waitforlisten 184389 00:21:08.161 02:40:41 -- common/autotest_common.sh@817 -- # '[' -z 184389 ']' 00:21:08.161 02:40:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.161 02:40:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:08.161 02:40:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.161 02:40:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:08.161 02:40:41 -- common/autotest_common.sh@10 -- # set +x 00:21:08.161 02:40:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:08.161 [2024-04-27 02:40:41.719700] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
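(Editor's note, not part of the captured log.) The ping exchange above verifies the split-namespace topology that nvmf/common.sh builds for NET_TYPE=phy: one E810 port ends up inside the cvl_0_0_ns_spdk namespace with 10.0.0.2 (the target side), while its peer stays in the root namespace with 10.0.0.1 (the initiator side). A rough stand-in using a veth pair, for reproducing the same isolation on a machine without the physical NICs; every name below is made up for illustration:
sudo ip netns add tgt_ns
sudo ip link add veth_host type veth peer name veth_tgt
sudo ip link set veth_tgt netns tgt_ns
sudo ip addr add 10.0.0.1/24 dev veth_host
sudo ip link set veth_host up
sudo ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
sudo ip netns exec tgt_ns ip link set veth_tgt up
sudo ip netns exec tgt_ns ip link set lo up
ping -c 1 10.0.0.2 && sudo ip netns exec tgt_ns ping -c 1 10.0.0.1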
00:21:08.161 [2024-04-27 02:40:41.719767] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.161 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.422 [2024-04-27 02:40:41.791807] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:08.422 [2024-04-27 02:40:41.863579] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.422 [2024-04-27 02:40:41.863617] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.422 [2024-04-27 02:40:41.863625] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.422 [2024-04-27 02:40:41.863631] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.422 [2024-04-27 02:40:41.863637] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.422 [2024-04-27 02:40:41.863749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.422 [2024-04-27 02:40:41.863874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.422 [2024-04-27 02:40:41.863876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.995 02:40:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:08.995 02:40:42 -- common/autotest_common.sh@850 -- # return 0 00:21:08.995 02:40:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:08.995 02:40:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:08.995 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:21:08.995 02:40:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.995 02:40:42 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:08.995 02:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.995 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:21:08.995 [2024-04-27 02:40:42.551930] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.995 02:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.995 02:40:42 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:08.995 02:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.995 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:21:08.995 Malloc0 00:21:08.995 02:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.995 02:40:42 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.995 02:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.995 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:21:08.995 02:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.995 02:40:42 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:08.995 02:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.995 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:21:09.256 02:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:09.256 02:40:42 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.256 02:40:42 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:21:09.256 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:21:09.256 [2024-04-27 02:40:42.626636] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.256 02:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:09.256 02:40:42 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:09.256 02:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:09.256 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:21:09.256 [2024-04-27 02:40:42.638606] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:09.256 02:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:09.256 02:40:42 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:09.256 02:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:09.256 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:21:09.256 Malloc1 00:21:09.256 02:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:09.256 02:40:42 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:09.256 02:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:09.256 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:21:09.256 02:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:09.256 02:40:42 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:09.256 02:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:09.256 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:21:09.256 02:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:09.256 02:40:42 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:09.256 02:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:09.256 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:21:09.256 02:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:09.256 02:40:42 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:09.256 02:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:09.256 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:21:09.256 02:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:09.256 02:40:42 -- host/multicontroller.sh@44 -- # bdevperf_pid=184676 00:21:09.256 02:40:42 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.256 02:40:42 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:09.256 02:40:42 -- host/multicontroller.sh@47 -- # waitforlisten 184676 /var/tmp/bdevperf.sock 00:21:09.256 02:40:42 -- common/autotest_common.sh@817 -- # '[' -z 184676 ']' 00:21:09.256 02:40:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.256 02:40:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:09.256 02:40:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
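A condensed sketch of the target-side setup the trace above steps through, with NQNs, addresses and ports taken from the log (driving the same calls through scripts/rpc.py instead of the test's rpc_cmd wrapper is an assumption):
  # build the two-subsystem, two-listener target used by multicontroller.sh
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
  # bdevperf then starts idle (-z) on its own RPC socket and waits for controllers to be attached
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &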
00:21:09.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:09.256 02:40:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:09.256 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:21:10.227 02:40:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:10.227 02:40:43 -- common/autotest_common.sh@850 -- # return 0 00:21:10.227 02:40:43 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:10.227 02:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.227 02:40:43 -- common/autotest_common.sh@10 -- # set +x 00:21:10.227 NVMe0n1 00:21:10.227 02:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.227 02:40:43 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:10.227 02:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.227 02:40:43 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:10.227 02:40:43 -- common/autotest_common.sh@10 -- # set +x 00:21:10.227 02:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.227 1 00:21:10.227 02:40:43 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:10.227 02:40:43 -- common/autotest_common.sh@638 -- # local es=0 00:21:10.227 02:40:43 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:10.227 02:40:43 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:10.227 02:40:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:10.227 02:40:43 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:10.227 02:40:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:10.227 02:40:43 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:10.227 02:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.227 02:40:43 -- common/autotest_common.sh@10 -- # set +x 00:21:10.227 request: 00:21:10.227 { 00:21:10.227 "name": "NVMe0", 00:21:10.227 "trtype": "tcp", 00:21:10.227 "traddr": "10.0.0.2", 00:21:10.227 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:10.227 "hostaddr": "10.0.0.2", 00:21:10.227 "hostsvcid": "60000", 00:21:10.227 "adrfam": "ipv4", 00:21:10.227 "trsvcid": "4420", 00:21:10.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.227 "method": "bdev_nvme_attach_controller", 00:21:10.227 "req_id": 1 00:21:10.227 } 00:21:10.227 Got JSON-RPC error response 00:21:10.227 response: 00:21:10.227 { 00:21:10.227 "code": -114, 00:21:10.227 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:10.227 } 00:21:10.227 02:40:43 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:10.227 02:40:43 -- common/autotest_common.sh@641 -- # es=1 00:21:10.227 02:40:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:10.227 02:40:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:10.227 02:40:43 -- 
common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:10.227 02:40:43 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:10.227 02:40:43 -- common/autotest_common.sh@638 -- # local es=0 00:21:10.227 02:40:43 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:10.227 02:40:43 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:10.227 02:40:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:10.227 02:40:43 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:10.227 02:40:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:10.227 02:40:43 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:10.227 02:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.227 02:40:43 -- common/autotest_common.sh@10 -- # set +x 00:21:10.227 request: 00:21:10.227 { 00:21:10.227 "name": "NVMe0", 00:21:10.227 "trtype": "tcp", 00:21:10.227 "traddr": "10.0.0.2", 00:21:10.227 "hostaddr": "10.0.0.2", 00:21:10.227 "hostsvcid": "60000", 00:21:10.227 "adrfam": "ipv4", 00:21:10.227 "trsvcid": "4420", 00:21:10.227 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:10.227 "method": "bdev_nvme_attach_controller", 00:21:10.227 "req_id": 1 00:21:10.227 } 00:21:10.227 Got JSON-RPC error response 00:21:10.227 response: 00:21:10.227 { 00:21:10.227 "code": -114, 00:21:10.227 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:10.227 } 00:21:10.227 02:40:43 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:10.227 02:40:43 -- common/autotest_common.sh@641 -- # es=1 00:21:10.227 02:40:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:10.227 02:40:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:10.227 02:40:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:10.227 02:40:43 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:10.227 02:40:43 -- common/autotest_common.sh@638 -- # local es=0 00:21:10.227 02:40:43 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:10.227 02:40:43 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:10.227 02:40:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:10.227 02:40:43 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:10.227 02:40:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:10.227 02:40:43 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:10.227 02:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.227 02:40:43 -- common/autotest_common.sh@10 -- # set +x 00:21:10.227 request: 00:21:10.227 { 00:21:10.227 "name": 
"NVMe0", 00:21:10.227 "trtype": "tcp", 00:21:10.227 "traddr": "10.0.0.2", 00:21:10.227 "hostaddr": "10.0.0.2", 00:21:10.227 "hostsvcid": "60000", 00:21:10.227 "adrfam": "ipv4", 00:21:10.227 "trsvcid": "4420", 00:21:10.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.227 "multipath": "disable", 00:21:10.227 "method": "bdev_nvme_attach_controller", 00:21:10.227 "req_id": 1 00:21:10.228 } 00:21:10.228 Got JSON-RPC error response 00:21:10.228 response: 00:21:10.228 { 00:21:10.228 "code": -114, 00:21:10.228 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:10.228 } 00:21:10.228 02:40:43 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:10.228 02:40:43 -- common/autotest_common.sh@641 -- # es=1 00:21:10.228 02:40:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:10.228 02:40:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:10.228 02:40:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:10.228 02:40:43 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:10.228 02:40:43 -- common/autotest_common.sh@638 -- # local es=0 00:21:10.228 02:40:43 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:10.228 02:40:43 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:10.228 02:40:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:10.228 02:40:43 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:10.228 02:40:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:10.228 02:40:43 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:10.228 02:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.228 02:40:43 -- common/autotest_common.sh@10 -- # set +x 00:21:10.228 request: 00:21:10.228 { 00:21:10.228 "name": "NVMe0", 00:21:10.228 "trtype": "tcp", 00:21:10.228 "traddr": "10.0.0.2", 00:21:10.228 "hostaddr": "10.0.0.2", 00:21:10.228 "hostsvcid": "60000", 00:21:10.228 "adrfam": "ipv4", 00:21:10.228 "trsvcid": "4420", 00:21:10.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.228 "multipath": "failover", 00:21:10.228 "method": "bdev_nvme_attach_controller", 00:21:10.228 "req_id": 1 00:21:10.228 } 00:21:10.228 Got JSON-RPC error response 00:21:10.228 response: 00:21:10.228 { 00:21:10.228 "code": -114, 00:21:10.228 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:10.228 } 00:21:10.228 02:40:43 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:10.228 02:40:43 -- common/autotest_common.sh@641 -- # es=1 00:21:10.228 02:40:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:10.228 02:40:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:10.228 02:40:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:10.228 02:40:43 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:10.228 02:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.228 02:40:43 -- 
common/autotest_common.sh@10 -- # set +x 00:21:10.228 00:21:10.228 02:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.228 02:40:43 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:10.228 02:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.228 02:40:43 -- common/autotest_common.sh@10 -- # set +x 00:21:10.493 02:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.493 02:40:43 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:10.493 02:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.494 02:40:43 -- common/autotest_common.sh@10 -- # set +x 00:21:10.494 00:21:10.494 02:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.494 02:40:43 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:10.494 02:40:43 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:10.494 02:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.494 02:40:43 -- common/autotest_common.sh@10 -- # set +x 00:21:10.494 02:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.494 02:40:43 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:10.494 02:40:43 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:11.879 0 00:21:11.879 02:40:45 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:11.879 02:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.879 02:40:45 -- common/autotest_common.sh@10 -- # set +x 00:21:11.879 02:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.879 02:40:45 -- host/multicontroller.sh@100 -- # killprocess 184676 00:21:11.879 02:40:45 -- common/autotest_common.sh@936 -- # '[' -z 184676 ']' 00:21:11.879 02:40:45 -- common/autotest_common.sh@940 -- # kill -0 184676 00:21:11.879 02:40:45 -- common/autotest_common.sh@941 -- # uname 00:21:11.879 02:40:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:11.879 02:40:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 184676 00:21:11.879 02:40:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:11.879 02:40:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:11.879 02:40:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 184676' 00:21:11.879 killing process with pid 184676 00:21:11.879 02:40:45 -- common/autotest_common.sh@955 -- # kill 184676 00:21:11.879 02:40:45 -- common/autotest_common.sh@960 -- # wait 184676 00:21:11.879 02:40:45 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:11.879 02:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.879 02:40:45 -- common/autotest_common.sh@10 -- # set +x 00:21:11.879 02:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.879 02:40:45 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:11.879 02:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.879 02:40:45 -- common/autotest_common.sh@10 -- # set +x 00:21:11.879 02:40:45 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:21:11.879 02:40:45 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:11.879 02:40:45 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:11.879 02:40:45 -- common/autotest_common.sh@1598 -- # read -r file 00:21:11.879 02:40:45 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:11.879 02:40:45 -- common/autotest_common.sh@1597 -- # sort -u 00:21:11.879 02:40:45 -- common/autotest_common.sh@1599 -- # cat 00:21:11.879 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:11.880 [2024-04-27 02:40:42.755942] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:21:11.880 [2024-04-27 02:40:42.755995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184676 ] 00:21:11.880 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.880 [2024-04-27 02:40:42.813626] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.880 [2024-04-27 02:40:42.875857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.880 [2024-04-27 02:40:43.946134] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name 73217215-a108-4950-9237-3a194e9ec8bb already exists 00:21:11.880 [2024-04-27 02:40:43.946163] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:73217215-a108-4950-9237-3a194e9ec8bb alias for bdev NVMe1n1 00:21:11.880 [2024-04-27 02:40:43.946173] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:11.880 Running I/O for 1 seconds... 
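On the host side of the run above, the same subsystem is attached over both listeners before bdevperf I/O is kicked off; roughly, with values from the trace (again assuming scripts/rpc.py in place of the test's rpc_cmd wrapper):
  # first path, port 4420
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # the negative cases (different hostnqn, cnode2, -x disable, -x failover under the same
  # controller name) are all expected to fail with code -114, as the JSON-RPC responses show
  # second path on port 4421, then swap it for a separately named controller
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # run the actual I/O pass against the attached controllers
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests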
00:21:11.880 00:21:11.880 Latency(us) 00:21:11.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.880 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:11.880 NVMe0n1 : 1.01 19611.02 76.61 0.00 0.00 6509.66 4587.52 17585.49 00:21:11.880 =================================================================================================================== 00:21:11.880 Total : 19611.02 76.61 0.00 0.00 6509.66 4587.52 17585.49 00:21:11.880 Received shutdown signal, test time was about 1.000000 seconds 00:21:11.880 00:21:11.880 Latency(us) 00:21:11.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.880 =================================================================================================================== 00:21:11.880 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:11.880 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:11.880 02:40:45 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:11.880 02:40:45 -- common/autotest_common.sh@1598 -- # read -r file 00:21:11.880 02:40:45 -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:11.880 02:40:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:11.880 02:40:45 -- nvmf/common.sh@117 -- # sync 00:21:11.880 02:40:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:11.880 02:40:45 -- nvmf/common.sh@120 -- # set +e 00:21:11.880 02:40:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.880 02:40:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:11.880 rmmod nvme_tcp 00:21:11.880 rmmod nvme_fabrics 00:21:11.880 rmmod nvme_keyring 00:21:11.880 02:40:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.880 02:40:45 -- nvmf/common.sh@124 -- # set -e 00:21:11.880 02:40:45 -- nvmf/common.sh@125 -- # return 0 00:21:11.880 02:40:45 -- nvmf/common.sh@478 -- # '[' -n 184389 ']' 00:21:11.880 02:40:45 -- nvmf/common.sh@479 -- # killprocess 184389 00:21:11.880 02:40:45 -- common/autotest_common.sh@936 -- # '[' -z 184389 ']' 00:21:11.880 02:40:45 -- common/autotest_common.sh@940 -- # kill -0 184389 00:21:11.880 02:40:45 -- common/autotest_common.sh@941 -- # uname 00:21:11.880 02:40:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:11.880 02:40:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 184389 00:21:11.880 02:40:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:11.880 02:40:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:11.880 02:40:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 184389' 00:21:11.880 killing process with pid 184389 00:21:11.880 02:40:45 -- common/autotest_common.sh@955 -- # kill 184389 00:21:11.880 02:40:45 -- common/autotest_common.sh@960 -- # wait 184389 00:21:12.141 02:40:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:12.141 02:40:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:12.141 02:40:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:12.141 02:40:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:12.141 02:40:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:12.141 02:40:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.141 02:40:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.141 02:40:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.691 02:40:47 -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:14.691 00:21:14.691 real 0m12.746s 00:21:14.691 user 0m15.891s 00:21:14.691 sys 0m5.657s 00:21:14.691 02:40:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:14.691 02:40:47 -- common/autotest_common.sh@10 -- # set +x 00:21:14.691 ************************************ 00:21:14.691 END TEST nvmf_multicontroller 00:21:14.691 ************************************ 00:21:14.691 02:40:47 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:14.691 02:40:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:14.691 02:40:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:14.691 02:40:47 -- common/autotest_common.sh@10 -- # set +x 00:21:14.691 ************************************ 00:21:14.691 START TEST nvmf_aer 00:21:14.691 ************************************ 00:21:14.691 02:40:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:14.691 * Looking for test storage... 00:21:14.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:14.691 02:40:48 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.691 02:40:48 -- nvmf/common.sh@7 -- # uname -s 00:21:14.691 02:40:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.691 02:40:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.691 02:40:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.691 02:40:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.691 02:40:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.691 02:40:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.691 02:40:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.691 02:40:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.691 02:40:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.691 02:40:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.691 02:40:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:14.691 02:40:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:14.691 02:40:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.691 02:40:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.691 02:40:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.691 02:40:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.691 02:40:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.691 02:40:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.691 02:40:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.691 02:40:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.691 02:40:48 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.691 02:40:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.691 02:40:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.691 02:40:48 -- paths/export.sh@5 -- # export PATH 00:21:14.691 02:40:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.691 02:40:48 -- nvmf/common.sh@47 -- # : 0 00:21:14.691 02:40:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:14.691 02:40:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:14.691 02:40:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.691 02:40:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.691 02:40:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.691 02:40:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:14.691 02:40:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:14.691 02:40:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:14.691 02:40:48 -- host/aer.sh@11 -- # nvmftestinit 00:21:14.691 02:40:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:14.691 02:40:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.691 02:40:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:14.691 02:40:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:14.691 02:40:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:14.691 02:40:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.691 02:40:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.691 02:40:48 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.691 02:40:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:14.691 02:40:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:14.691 02:40:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:14.691 02:40:48 -- common/autotest_common.sh@10 -- # set +x 00:21:21.287 02:40:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:21.287 02:40:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:21.287 02:40:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:21.287 02:40:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:21.287 02:40:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:21.287 02:40:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:21.287 02:40:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:21.287 02:40:54 -- nvmf/common.sh@295 -- # net_devs=() 00:21:21.287 02:40:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:21.287 02:40:54 -- nvmf/common.sh@296 -- # e810=() 00:21:21.287 02:40:54 -- nvmf/common.sh@296 -- # local -ga e810 00:21:21.287 02:40:54 -- nvmf/common.sh@297 -- # x722=() 00:21:21.287 02:40:54 -- nvmf/common.sh@297 -- # local -ga x722 00:21:21.287 02:40:54 -- nvmf/common.sh@298 -- # mlx=() 00:21:21.287 02:40:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:21.287 02:40:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.287 02:40:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.287 02:40:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.287 02:40:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.287 02:40:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.287 02:40:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.287 02:40:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.287 02:40:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.287 02:40:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.287 02:40:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.287 02:40:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.287 02:40:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:21.287 02:40:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:21.287 02:40:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:21.287 02:40:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.287 02:40:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:21.287 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:21.287 02:40:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.287 02:40:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:21.287 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:21.287 
02:40:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:21.287 02:40:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.287 02:40:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.287 02:40:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:21.287 02:40:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.287 02:40:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:21.287 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:21.287 02:40:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.287 02:40:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.287 02:40:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.287 02:40:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:21.287 02:40:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.287 02:40:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:21.287 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:21.287 02:40:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.287 02:40:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:21.287 02:40:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:21.287 02:40:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:21.287 02:40:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:21.287 02:40:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.287 02:40:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.287 02:40:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.287 02:40:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:21.287 02:40:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.287 02:40:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.287 02:40:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:21.287 02:40:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.287 02:40:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.287 02:40:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:21.287 02:40:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:21.287 02:40:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.287 02:40:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.549 02:40:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.549 02:40:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.549 02:40:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:21.549 02:40:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.549 02:40:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.549 02:40:55 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.549 02:40:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:21.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.758 ms 00:21:21.549 00:21:21.549 --- 10.0.0.2 ping statistics --- 00:21:21.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.549 rtt min/avg/max/mdev = 0.758/0.758/0.758/0.000 ms 00:21:21.549 02:40:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:21:21.549 00:21:21.549 --- 10.0.0.1 ping statistics --- 00:21:21.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.549 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:21:21.549 02:40:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.549 02:40:55 -- nvmf/common.sh@411 -- # return 0 00:21:21.549 02:40:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:21.549 02:40:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.549 02:40:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:21.549 02:40:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:21.549 02:40:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.549 02:40:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:21.549 02:40:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:21.810 02:40:55 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:21.810 02:40:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:21.810 02:40:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:21.810 02:40:55 -- common/autotest_common.sh@10 -- # set +x 00:21:21.810 02:40:55 -- nvmf/common.sh@470 -- # nvmfpid=189364 00:21:21.810 02:40:55 -- nvmf/common.sh@471 -- # waitforlisten 189364 00:21:21.810 02:40:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:21.810 02:40:55 -- common/autotest_common.sh@817 -- # '[' -z 189364 ']' 00:21:21.810 02:40:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.810 02:40:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:21.810 02:40:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.810 02:40:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:21.810 02:40:55 -- common/autotest_common.sh@10 -- # set +x 00:21:21.810 [2024-04-27 02:40:55.245528] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:21:21.810 [2024-04-27 02:40:55.245596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.810 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.810 [2024-04-27 02:40:55.316570] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.810 [2024-04-27 02:40:55.389422] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
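The nvmftestinit path above pairs the two E810 ports back to back through a network namespace, so the initiator side (cvl_0_1, 10.0.0.1) can reach the target side (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk). A minimal sketch of that wiring, with interface names and addresses from the trace:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  # nvmf_tgt for the aer test is then launched inside the namespace
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF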
00:21:21.810 [2024-04-27 02:40:55.389460] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.810 [2024-04-27 02:40:55.389468] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.810 [2024-04-27 02:40:55.389474] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.810 [2024-04-27 02:40:55.389480] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:21.810 [2024-04-27 02:40:55.389596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.810 [2024-04-27 02:40:55.389714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.810 [2024-04-27 02:40:55.389841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.810 [2024-04-27 02:40:55.389844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.752 02:40:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:22.752 02:40:56 -- common/autotest_common.sh@850 -- # return 0 00:21:22.752 02:40:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:22.752 02:40:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:22.752 02:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:22.752 02:40:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.752 02:40:56 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:22.752 02:40:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.752 02:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:22.752 [2024-04-27 02:40:56.072827] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.752 02:40:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.752 02:40:56 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:22.752 02:40:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.752 02:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:22.752 Malloc0 00:21:22.752 02:40:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.752 02:40:56 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:22.752 02:40:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.752 02:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:22.752 02:40:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.752 02:40:56 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:22.752 02:40:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.752 02:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:22.752 02:40:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.752 02:40:56 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.752 02:40:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.752 02:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:22.752 [2024-04-27 02:40:56.132220] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.752 02:40:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.752 02:40:56 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:22.752 02:40:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.752 02:40:56 -- 
common/autotest_common.sh@10 -- # set +x 00:21:22.752 [2024-04-27 02:40:56.144036] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:22.752 [ 00:21:22.752 { 00:21:22.752 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:22.752 "subtype": "Discovery", 00:21:22.752 "listen_addresses": [], 00:21:22.752 "allow_any_host": true, 00:21:22.752 "hosts": [] 00:21:22.752 }, 00:21:22.752 { 00:21:22.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.752 "subtype": "NVMe", 00:21:22.752 "listen_addresses": [ 00:21:22.752 { 00:21:22.752 "transport": "TCP", 00:21:22.752 "trtype": "TCP", 00:21:22.752 "adrfam": "IPv4", 00:21:22.752 "traddr": "10.0.0.2", 00:21:22.752 "trsvcid": "4420" 00:21:22.752 } 00:21:22.752 ], 00:21:22.752 "allow_any_host": true, 00:21:22.752 "hosts": [], 00:21:22.752 "serial_number": "SPDK00000000000001", 00:21:22.752 "model_number": "SPDK bdev Controller", 00:21:22.752 "max_namespaces": 2, 00:21:22.752 "min_cntlid": 1, 00:21:22.752 "max_cntlid": 65519, 00:21:22.752 "namespaces": [ 00:21:22.752 { 00:21:22.752 "nsid": 1, 00:21:22.752 "bdev_name": "Malloc0", 00:21:22.752 "name": "Malloc0", 00:21:22.752 "nguid": "648D7F893428496EBCBB35B5170A2AB6", 00:21:22.752 "uuid": "648d7f89-3428-496e-bcbb-35b5170a2ab6" 00:21:22.752 } 00:21:22.752 ] 00:21:22.752 } 00:21:22.752 ] 00:21:22.752 02:40:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.752 02:40:56 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:22.752 02:40:56 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:22.752 02:40:56 -- host/aer.sh@33 -- # aerpid=189633 00:21:22.752 02:40:56 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:22.752 02:40:56 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:22.752 02:40:56 -- common/autotest_common.sh@1251 -- # local i=0 00:21:22.752 02:40:56 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:22.752 02:40:56 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:21:22.752 02:40:56 -- common/autotest_common.sh@1254 -- # i=1 00:21:22.752 02:40:56 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:22.752 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.752 02:40:56 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:22.752 02:40:56 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:21:22.752 02:40:56 -- common/autotest_common.sh@1254 -- # i=2 00:21:22.752 02:40:56 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:22.752 02:40:56 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:22.752 02:40:56 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:22.753 02:40:56 -- common/autotest_common.sh@1262 -- # return 0 00:21:22.753 02:40:56 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:22.753 02:40:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.753 02:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:23.013 Malloc1 00:21:23.013 02:40:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.013 02:40:56 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:23.013 02:40:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.013 02:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:23.013 02:40:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.013 02:40:56 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:23.013 02:40:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.013 02:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:23.013 Asynchronous Event Request test 00:21:23.013 Attaching to 10.0.0.2 00:21:23.013 Attached to 10.0.0.2 00:21:23.013 Registering asynchronous event callbacks... 00:21:23.013 Starting namespace attribute notice tests for all controllers... 00:21:23.013 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:23.013 aer_cb - Changed Namespace 00:21:23.013 Cleaning up... 00:21:23.013 [ 00:21:23.013 { 00:21:23.013 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:23.013 "subtype": "Discovery", 00:21:23.013 "listen_addresses": [], 00:21:23.013 "allow_any_host": true, 00:21:23.013 "hosts": [] 00:21:23.013 }, 00:21:23.013 { 00:21:23.013 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.013 "subtype": "NVMe", 00:21:23.013 "listen_addresses": [ 00:21:23.013 { 00:21:23.013 "transport": "TCP", 00:21:23.013 "trtype": "TCP", 00:21:23.013 "adrfam": "IPv4", 00:21:23.013 "traddr": "10.0.0.2", 00:21:23.013 "trsvcid": "4420" 00:21:23.013 } 00:21:23.013 ], 00:21:23.013 "allow_any_host": true, 00:21:23.013 "hosts": [], 00:21:23.013 "serial_number": "SPDK00000000000001", 00:21:23.013 "model_number": "SPDK bdev Controller", 00:21:23.013 "max_namespaces": 2, 00:21:23.013 "min_cntlid": 1, 00:21:23.013 "max_cntlid": 65519, 00:21:23.013 "namespaces": [ 00:21:23.013 { 00:21:23.013 "nsid": 1, 00:21:23.013 "bdev_name": "Malloc0", 00:21:23.013 "name": "Malloc0", 00:21:23.013 "nguid": "648D7F893428496EBCBB35B5170A2AB6", 00:21:23.013 "uuid": "648d7f89-3428-496e-bcbb-35b5170a2ab6" 00:21:23.013 }, 00:21:23.013 { 00:21:23.013 "nsid": 2, 00:21:23.013 "bdev_name": "Malloc1", 00:21:23.013 "name": "Malloc1", 00:21:23.013 "nguid": "44313AA99B8C4C418F232F8F066A98C7", 00:21:23.013 "uuid": "44313aa9-9b8c-4c41-8f23-2f8f066a98c7" 00:21:23.013 } 00:21:23.013 ] 00:21:23.013 } 00:21:23.013 ] 00:21:23.013 02:40:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.013 02:40:56 -- host/aer.sh@43 -- # wait 189633 00:21:23.013 02:40:56 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:23.013 02:40:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.013 02:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:23.013 02:40:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.013 02:40:56 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:23.013 02:40:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.013 02:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:23.013 02:40:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.013 02:40:56 -- host/aer.sh@47 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:23.013 02:40:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.013 02:40:56 -- common/autotest_common.sh@10 -- # set +x 00:21:23.013 02:40:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.013 02:40:56 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:23.014 02:40:56 -- host/aer.sh@51 -- # nvmftestfini 00:21:23.014 02:40:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:23.014 02:40:56 -- nvmf/common.sh@117 -- # sync 00:21:23.014 02:40:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:23.014 02:40:56 -- nvmf/common.sh@120 -- # set +e 00:21:23.014 02:40:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:23.014 02:40:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:23.014 rmmod nvme_tcp 00:21:23.014 rmmod nvme_fabrics 00:21:23.014 rmmod nvme_keyring 00:21:23.014 02:40:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:23.014 02:40:56 -- nvmf/common.sh@124 -- # set -e 00:21:23.014 02:40:56 -- nvmf/common.sh@125 -- # return 0 00:21:23.014 02:40:56 -- nvmf/common.sh@478 -- # '[' -n 189364 ']' 00:21:23.014 02:40:56 -- nvmf/common.sh@479 -- # killprocess 189364 00:21:23.014 02:40:56 -- common/autotest_common.sh@936 -- # '[' -z 189364 ']' 00:21:23.014 02:40:56 -- common/autotest_common.sh@940 -- # kill -0 189364 00:21:23.014 02:40:56 -- common/autotest_common.sh@941 -- # uname 00:21:23.014 02:40:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:23.014 02:40:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 189364 00:21:23.014 02:40:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:23.014 02:40:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:23.014 02:40:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 189364' 00:21:23.014 killing process with pid 189364 00:21:23.014 02:40:56 -- common/autotest_common.sh@955 -- # kill 189364 00:21:23.014 [2024-04-27 02:40:56.605918] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:23.014 02:40:56 -- common/autotest_common.sh@960 -- # wait 189364 00:21:23.275 02:40:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:23.275 02:40:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:23.275 02:40:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:23.275 02:40:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.275 02:40:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:23.275 02:40:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.275 02:40:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.275 02:40:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.819 02:40:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:25.819 00:21:25.819 real 0m10.919s 00:21:25.819 user 0m7.473s 00:21:25.819 sys 0m5.754s 00:21:25.819 02:40:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:25.819 02:40:58 -- common/autotest_common.sh@10 -- # set +x 00:21:25.819 ************************************ 00:21:25.819 END TEST nvmf_aer 00:21:25.819 ************************************ 00:21:25.819 02:40:58 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:25.819 02:40:58 -- common/autotest_common.sh@1087 -- # '[' 3 
-le 1 ']' 00:21:25.819 02:40:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:25.819 02:40:58 -- common/autotest_common.sh@10 -- # set +x 00:21:25.819 ************************************ 00:21:25.819 START TEST nvmf_async_init 00:21:25.819 ************************************ 00:21:25.819 02:40:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:25.819 * Looking for test storage... 00:21:25.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:25.819 02:40:59 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.819 02:40:59 -- nvmf/common.sh@7 -- # uname -s 00:21:25.819 02:40:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.819 02:40:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.819 02:40:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.819 02:40:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.819 02:40:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.819 02:40:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.819 02:40:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.819 02:40:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.819 02:40:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.819 02:40:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.819 02:40:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:25.819 02:40:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:25.819 02:40:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.819 02:40:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.819 02:40:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.819 02:40:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.819 02:40:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.819 02:40:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.819 02:40:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.819 02:40:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.819 02:40:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.819 02:40:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.819 02:40:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.819 02:40:59 -- paths/export.sh@5 -- # export PATH 00:21:25.819 02:40:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.819 02:40:59 -- nvmf/common.sh@47 -- # : 0 00:21:25.819 02:40:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.819 02:40:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:25.819 02:40:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.819 02:40:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.819 02:40:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.819 02:40:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.819 02:40:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.819 02:40:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.819 02:40:59 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:25.819 02:40:59 -- host/async_init.sh@14 -- # null_block_size=512 00:21:25.819 02:40:59 -- host/async_init.sh@15 -- # null_bdev=null0 00:21:25.819 02:40:59 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:25.819 02:40:59 -- host/async_init.sh@20 -- # uuidgen 00:21:25.819 02:40:59 -- host/async_init.sh@20 -- # tr -d - 00:21:25.819 02:40:59 -- host/async_init.sh@20 -- # nguid=d613026b7d734737ae7673715be562d7 00:21:25.819 02:40:59 -- host/async_init.sh@22 -- # nvmftestinit 00:21:25.819 02:40:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:25.819 02:40:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.820 02:40:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:25.820 02:40:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:25.820 02:40:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:25.820 02:40:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.820 02:40:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.820 02:40:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.820 
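A note on the identifiers above: the namespace GUID for this test is generated once at the top of async_init.sh by stripping the dashes from a random UUID, and the dashed form is what later appears in the trace as the uuid/alias of nvme0n1. A minimal sketch of that derivation, using only the uuidgen and tr calls traced above:

    nguid=$(uuidgen | tr -d -)   # e.g. d613026b7d734737ae7673715be562d7 in this run; the dashed form
                                 # (d613026b-7d73-4737-ae76-73715be562d7) is what bdev_get_bdevs reports
                                 # as the namespace uuid/alias once the controller is attached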
02:40:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:25.820 02:40:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:25.820 02:40:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:25.820 02:40:59 -- common/autotest_common.sh@10 -- # set +x 00:21:32.413 02:41:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:32.413 02:41:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:32.413 02:41:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:32.413 02:41:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:32.413 02:41:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:32.413 02:41:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:32.413 02:41:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:32.413 02:41:05 -- nvmf/common.sh@295 -- # net_devs=() 00:21:32.413 02:41:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:32.413 02:41:05 -- nvmf/common.sh@296 -- # e810=() 00:21:32.413 02:41:05 -- nvmf/common.sh@296 -- # local -ga e810 00:21:32.413 02:41:05 -- nvmf/common.sh@297 -- # x722=() 00:21:32.413 02:41:05 -- nvmf/common.sh@297 -- # local -ga x722 00:21:32.413 02:41:05 -- nvmf/common.sh@298 -- # mlx=() 00:21:32.413 02:41:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:32.413 02:41:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.413 02:41:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.413 02:41:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.413 02:41:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.413 02:41:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.413 02:41:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.413 02:41:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.413 02:41:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.413 02:41:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.413 02:41:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.413 02:41:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.413 02:41:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:32.413 02:41:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:32.413 02:41:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:32.413 02:41:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.413 02:41:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:32.413 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:32.413 02:41:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.413 02:41:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:32.413 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:32.413 02:41:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.413 
02:41:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:32.413 02:41:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:32.413 02:41:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.413 02:41:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.413 02:41:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:32.413 02:41:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.413 02:41:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:32.413 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:32.413 02:41:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.414 02:41:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.414 02:41:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.414 02:41:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:32.414 02:41:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.414 02:41:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:32.414 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:32.414 02:41:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.414 02:41:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:32.414 02:41:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:32.414 02:41:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:32.414 02:41:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:32.414 02:41:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:32.414 02:41:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.414 02:41:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.414 02:41:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.414 02:41:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:32.414 02:41:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.414 02:41:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.414 02:41:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:32.414 02:41:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.414 02:41:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.414 02:41:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:32.414 02:41:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:32.414 02:41:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.414 02:41:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.414 02:41:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.414 02:41:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.414 02:41:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:32.414 02:41:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.414 02:41:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.414 02:41:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:21:32.414 02:41:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:32.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:21:32.414 00:21:32.414 --- 10.0.0.2 ping statistics --- 00:21:32.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.414 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:21:32.414 02:41:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:32.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:21:32.414 00:21:32.414 --- 10.0.0.1 ping statistics --- 00:21:32.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.414 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:21:32.414 02:41:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.414 02:41:06 -- nvmf/common.sh@411 -- # return 0 00:21:32.414 02:41:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:32.414 02:41:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.414 02:41:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:32.414 02:41:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:32.414 02:41:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.414 02:41:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:32.414 02:41:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:32.675 02:41:06 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:32.675 02:41:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:32.675 02:41:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:32.675 02:41:06 -- common/autotest_common.sh@10 -- # set +x 00:21:32.675 02:41:06 -- nvmf/common.sh@470 -- # nvmfpid=193734 00:21:32.675 02:41:06 -- nvmf/common.sh@471 -- # waitforlisten 193734 00:21:32.675 02:41:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:32.675 02:41:06 -- common/autotest_common.sh@817 -- # '[' -z 193734 ']' 00:21:32.675 02:41:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.675 02:41:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:32.675 02:41:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.675 02:41:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:32.675 02:41:06 -- common/autotest_common.sh@10 -- # set +x 00:21:32.675 [2024-04-27 02:41:06.111414] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:21:32.675 [2024-04-27 02:41:06.111463] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.675 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.675 [2024-04-27 02:41:06.175831] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.675 [2024-04-27 02:41:06.238731] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.675 [2024-04-27 02:41:06.238766] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:32.675 [2024-04-27 02:41:06.238774] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.675 [2024-04-27 02:41:06.238781] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.675 [2024-04-27 02:41:06.238787] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.675 [2024-04-27 02:41:06.238814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.619 02:41:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:33.619 02:41:06 -- common/autotest_common.sh@850 -- # return 0 00:21:33.619 02:41:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:33.619 02:41:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:33.619 02:41:06 -- common/autotest_common.sh@10 -- # set +x 00:21:33.619 02:41:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.619 02:41:06 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:33.619 02:41:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.619 02:41:06 -- common/autotest_common.sh@10 -- # set +x 00:21:33.619 [2024-04-27 02:41:06.953555] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.619 02:41:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.619 02:41:06 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:33.619 02:41:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.619 02:41:06 -- common/autotest_common.sh@10 -- # set +x 00:21:33.619 null0 00:21:33.619 02:41:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.619 02:41:06 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:33.619 02:41:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.619 02:41:06 -- common/autotest_common.sh@10 -- # set +x 00:21:33.619 02:41:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.619 02:41:06 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:33.619 02:41:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.619 02:41:06 -- common/autotest_common.sh@10 -- # set +x 00:21:33.619 02:41:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.619 02:41:06 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d613026b7d734737ae7673715be562d7 00:21:33.619 02:41:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.619 02:41:06 -- common/autotest_common.sh@10 -- # set +x 00:21:33.619 02:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.619 02:41:07 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:33.619 02:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.619 02:41:07 -- common/autotest_common.sh@10 -- # set +x 00:21:33.619 [2024-04-27 02:41:07.009827] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.619 02:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.619 02:41:07 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:33.619 02:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.619 02:41:07 -- common/autotest_common.sh@10 -- # set +x 00:21:33.880 nvme0n1 00:21:33.880 
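At this point the target side of the async_init test has been assembled entirely through RPCs; rpc_cmd in the trace is the test helper that forwards to SPDK's scripts/rpc.py. A condensed sketch of the same sequence, assuming a default RPC socket and taking the values from the trace above:

    rpc.py nvmf_create_transport -t tcp -o                       # TCP transport (-o as set in NVMF_TRANSPORT_OPTS above)
    rpc.py bdev_null_create null0 1024 512                       # 1024 MiB null bdev, 512 B blocks (2097152 blocks in the bdev dump)
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side of the same app: attach a bdev_nvme controller to that listener, which surfaces nvme0n1
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0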
02:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.880 02:41:07 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:33.880 02:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.880 02:41:07 -- common/autotest_common.sh@10 -- # set +x 00:21:33.880 [ 00:21:33.880 { 00:21:33.880 "name": "nvme0n1", 00:21:33.880 "aliases": [ 00:21:33.881 "d613026b-7d73-4737-ae76-73715be562d7" 00:21:33.881 ], 00:21:33.881 "product_name": "NVMe disk", 00:21:33.881 "block_size": 512, 00:21:33.881 "num_blocks": 2097152, 00:21:33.881 "uuid": "d613026b-7d73-4737-ae76-73715be562d7", 00:21:33.881 "assigned_rate_limits": { 00:21:33.881 "rw_ios_per_sec": 0, 00:21:33.881 "rw_mbytes_per_sec": 0, 00:21:33.881 "r_mbytes_per_sec": 0, 00:21:33.881 "w_mbytes_per_sec": 0 00:21:33.881 }, 00:21:33.881 "claimed": false, 00:21:33.881 "zoned": false, 00:21:33.881 "supported_io_types": { 00:21:33.881 "read": true, 00:21:33.881 "write": true, 00:21:33.881 "unmap": false, 00:21:33.881 "write_zeroes": true, 00:21:33.881 "flush": true, 00:21:33.881 "reset": true, 00:21:33.881 "compare": true, 00:21:33.881 "compare_and_write": true, 00:21:33.881 "abort": true, 00:21:33.881 "nvme_admin": true, 00:21:33.881 "nvme_io": true 00:21:33.881 }, 00:21:33.881 "memory_domains": [ 00:21:33.881 { 00:21:33.881 "dma_device_id": "system", 00:21:33.881 "dma_device_type": 1 00:21:33.881 } 00:21:33.881 ], 00:21:33.881 "driver_specific": { 00:21:33.881 "nvme": [ 00:21:33.881 { 00:21:33.881 "trid": { 00:21:33.881 "trtype": "TCP", 00:21:33.881 "adrfam": "IPv4", 00:21:33.881 "traddr": "10.0.0.2", 00:21:33.881 "trsvcid": "4420", 00:21:33.881 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:33.881 }, 00:21:33.881 "ctrlr_data": { 00:21:33.881 "cntlid": 1, 00:21:33.881 "vendor_id": "0x8086", 00:21:33.881 "model_number": "SPDK bdev Controller", 00:21:33.881 "serial_number": "00000000000000000000", 00:21:33.881 "firmware_revision": "24.05", 00:21:33.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:33.881 "oacs": { 00:21:33.881 "security": 0, 00:21:33.881 "format": 0, 00:21:33.881 "firmware": 0, 00:21:33.881 "ns_manage": 0 00:21:33.881 }, 00:21:33.881 "multi_ctrlr": true, 00:21:33.881 "ana_reporting": false 00:21:33.881 }, 00:21:33.881 "vs": { 00:21:33.881 "nvme_version": "1.3" 00:21:33.881 }, 00:21:33.881 "ns_data": { 00:21:33.881 "id": 1, 00:21:33.881 "can_share": true 00:21:33.881 } 00:21:33.881 } 00:21:33.881 ], 00:21:33.881 "mp_policy": "active_passive" 00:21:33.881 } 00:21:33.881 } 00:21:33.881 ] 00:21:33.881 02:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.881 02:41:07 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:33.881 02:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.881 02:41:07 -- common/autotest_common.sh@10 -- # set +x 00:21:33.881 [2024-04-27 02:41:07.274362] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:33.881 [2024-04-27 02:41:07.274432] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2226360 (9): Bad file descriptor 00:21:33.881 [2024-04-27 02:41:07.406370] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
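The reset path above drops the fabrics connection (hence the transient "Bad file descriptor" flush error) and re-establishes it; the trace that follows re-reads the bdev to confirm it survived, with ctrlr_data.cntlid moving from 1 to 2. A condensed sketch of that check, assuming the same RPC helper and socket as above:

    rpc.py bdev_nvme_reset_controller nvme0    # disconnect + reconnect of the NVMe-oF controller
    rpc.py bdev_get_bdevs -b nvme0n1           # bdev still present; cntlid increments with each new association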
00:21:33.881 02:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.881 02:41:07 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:33.881 02:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.881 02:41:07 -- common/autotest_common.sh@10 -- # set +x 00:21:33.881 [ 00:21:33.881 { 00:21:33.881 "name": "nvme0n1", 00:21:33.881 "aliases": [ 00:21:33.881 "d613026b-7d73-4737-ae76-73715be562d7" 00:21:33.881 ], 00:21:33.881 "product_name": "NVMe disk", 00:21:33.881 "block_size": 512, 00:21:33.881 "num_blocks": 2097152, 00:21:33.881 "uuid": "d613026b-7d73-4737-ae76-73715be562d7", 00:21:33.881 "assigned_rate_limits": { 00:21:33.881 "rw_ios_per_sec": 0, 00:21:33.881 "rw_mbytes_per_sec": 0, 00:21:33.881 "r_mbytes_per_sec": 0, 00:21:33.881 "w_mbytes_per_sec": 0 00:21:33.881 }, 00:21:33.881 "claimed": false, 00:21:33.881 "zoned": false, 00:21:33.881 "supported_io_types": { 00:21:33.881 "read": true, 00:21:33.881 "write": true, 00:21:33.881 "unmap": false, 00:21:33.881 "write_zeroes": true, 00:21:33.881 "flush": true, 00:21:33.881 "reset": true, 00:21:33.881 "compare": true, 00:21:33.881 "compare_and_write": true, 00:21:33.881 "abort": true, 00:21:33.881 "nvme_admin": true, 00:21:33.881 "nvme_io": true 00:21:33.881 }, 00:21:33.881 "memory_domains": [ 00:21:33.881 { 00:21:33.881 "dma_device_id": "system", 00:21:33.881 "dma_device_type": 1 00:21:33.881 } 00:21:33.881 ], 00:21:33.881 "driver_specific": { 00:21:33.881 "nvme": [ 00:21:33.881 { 00:21:33.881 "trid": { 00:21:33.881 "trtype": "TCP", 00:21:33.881 "adrfam": "IPv4", 00:21:33.881 "traddr": "10.0.0.2", 00:21:33.881 "trsvcid": "4420", 00:21:33.881 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:33.881 }, 00:21:33.881 "ctrlr_data": { 00:21:33.881 "cntlid": 2, 00:21:33.881 "vendor_id": "0x8086", 00:21:33.881 "model_number": "SPDK bdev Controller", 00:21:33.881 "serial_number": "00000000000000000000", 00:21:33.881 "firmware_revision": "24.05", 00:21:33.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:33.881 "oacs": { 00:21:33.881 "security": 0, 00:21:33.881 "format": 0, 00:21:33.881 "firmware": 0, 00:21:33.881 "ns_manage": 0 00:21:33.881 }, 00:21:33.881 "multi_ctrlr": true, 00:21:33.881 "ana_reporting": false 00:21:33.881 }, 00:21:33.881 "vs": { 00:21:33.881 "nvme_version": "1.3" 00:21:33.881 }, 00:21:33.881 "ns_data": { 00:21:33.881 "id": 1, 00:21:33.881 "can_share": true 00:21:33.881 } 00:21:33.881 } 00:21:33.881 ], 00:21:33.881 "mp_policy": "active_passive" 00:21:33.881 } 00:21:33.881 } 00:21:33.881 ] 00:21:33.881 02:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.881 02:41:07 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.881 02:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.881 02:41:07 -- common/autotest_common.sh@10 -- # set +x 00:21:33.881 02:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.881 02:41:07 -- host/async_init.sh@53 -- # mktemp 00:21:33.881 02:41:07 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.xowRyUQKkq 00:21:33.881 02:41:07 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:33.881 02:41:07 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.xowRyUQKkq 00:21:33.881 02:41:07 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:33.881 02:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.881 02:41:07 -- common/autotest_common.sh@10 -- # set +x 00:21:33.881 02:41:07 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.881 02:41:07 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:33.881 02:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.881 02:41:07 -- common/autotest_common.sh@10 -- # set +x 00:21:33.881 [2024-04-27 02:41:07.474983] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:33.881 [2024-04-27 02:41:07.475093] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:33.881 02:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.881 02:41:07 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xowRyUQKkq 00:21:33.881 02:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.881 02:41:07 -- common/autotest_common.sh@10 -- # set +x 00:21:33.881 [2024-04-27 02:41:07.487009] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:33.881 02:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.881 02:41:07 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xowRyUQKkq 00:21:33.881 02:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.881 02:41:07 -- common/autotest_common.sh@10 -- # set +x 00:21:33.881 [2024-04-27 02:41:07.499043] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:33.881 [2024-04-27 02:41:07.499080] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:34.143 nvme0n1 00:21:34.143 02:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.143 02:41:07 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:34.143 02:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.143 02:41:07 -- common/autotest_common.sh@10 -- # set +x 00:21:34.143 [ 00:21:34.143 { 00:21:34.143 "name": "nvme0n1", 00:21:34.143 "aliases": [ 00:21:34.143 "d613026b-7d73-4737-ae76-73715be562d7" 00:21:34.143 ], 00:21:34.143 "product_name": "NVMe disk", 00:21:34.143 "block_size": 512, 00:21:34.143 "num_blocks": 2097152, 00:21:34.143 "uuid": "d613026b-7d73-4737-ae76-73715be562d7", 00:21:34.143 "assigned_rate_limits": { 00:21:34.143 "rw_ios_per_sec": 0, 00:21:34.143 "rw_mbytes_per_sec": 0, 00:21:34.143 "r_mbytes_per_sec": 0, 00:21:34.143 "w_mbytes_per_sec": 0 00:21:34.143 }, 00:21:34.143 "claimed": false, 00:21:34.143 "zoned": false, 00:21:34.143 "supported_io_types": { 00:21:34.143 "read": true, 00:21:34.143 "write": true, 00:21:34.143 "unmap": false, 00:21:34.143 "write_zeroes": true, 00:21:34.143 "flush": true, 00:21:34.143 "reset": true, 00:21:34.143 "compare": true, 00:21:34.143 "compare_and_write": true, 00:21:34.143 "abort": true, 00:21:34.143 "nvme_admin": true, 00:21:34.143 "nvme_io": true 00:21:34.143 }, 00:21:34.143 "memory_domains": [ 00:21:34.143 { 00:21:34.143 "dma_device_id": "system", 00:21:34.143 "dma_device_type": 1 00:21:34.143 } 00:21:34.143 ], 00:21:34.143 "driver_specific": { 00:21:34.143 "nvme": [ 00:21:34.143 { 00:21:34.143 "trid": { 00:21:34.143 "trtype": "TCP", 00:21:34.143 "adrfam": "IPv4", 00:21:34.143 "traddr": "10.0.0.2", 
00:21:34.143 "trsvcid": "4421", 00:21:34.143 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:34.143 }, 00:21:34.143 "ctrlr_data": { 00:21:34.143 "cntlid": 3, 00:21:34.143 "vendor_id": "0x8086", 00:21:34.143 "model_number": "SPDK bdev Controller", 00:21:34.143 "serial_number": "00000000000000000000", 00:21:34.143 "firmware_revision": "24.05", 00:21:34.143 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:34.143 "oacs": { 00:21:34.143 "security": 0, 00:21:34.143 "format": 0, 00:21:34.143 "firmware": 0, 00:21:34.143 "ns_manage": 0 00:21:34.143 }, 00:21:34.143 "multi_ctrlr": true, 00:21:34.143 "ana_reporting": false 00:21:34.143 }, 00:21:34.143 "vs": { 00:21:34.143 "nvme_version": "1.3" 00:21:34.143 }, 00:21:34.143 "ns_data": { 00:21:34.143 "id": 1, 00:21:34.143 "can_share": true 00:21:34.143 } 00:21:34.143 } 00:21:34.143 ], 00:21:34.143 "mp_policy": "active_passive" 00:21:34.143 } 00:21:34.143 } 00:21:34.143 ] 00:21:34.143 02:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.143 02:41:07 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.143 02:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.143 02:41:07 -- common/autotest_common.sh@10 -- # set +x 00:21:34.143 02:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.143 02:41:07 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.xowRyUQKkq 00:21:34.143 02:41:07 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:34.143 02:41:07 -- host/async_init.sh@78 -- # nvmftestfini 00:21:34.143 02:41:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:34.143 02:41:07 -- nvmf/common.sh@117 -- # sync 00:21:34.143 02:41:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:34.143 02:41:07 -- nvmf/common.sh@120 -- # set +e 00:21:34.144 02:41:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:34.144 02:41:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:34.144 rmmod nvme_tcp 00:21:34.144 rmmod nvme_fabrics 00:21:34.144 rmmod nvme_keyring 00:21:34.144 02:41:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:34.144 02:41:07 -- nvmf/common.sh@124 -- # set -e 00:21:34.144 02:41:07 -- nvmf/common.sh@125 -- # return 0 00:21:34.144 02:41:07 -- nvmf/common.sh@478 -- # '[' -n 193734 ']' 00:21:34.144 02:41:07 -- nvmf/common.sh@479 -- # killprocess 193734 00:21:34.144 02:41:07 -- common/autotest_common.sh@936 -- # '[' -z 193734 ']' 00:21:34.144 02:41:07 -- common/autotest_common.sh@940 -- # kill -0 193734 00:21:34.144 02:41:07 -- common/autotest_common.sh@941 -- # uname 00:21:34.144 02:41:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:34.144 02:41:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 193734 00:21:34.144 02:41:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:34.144 02:41:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:34.144 02:41:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 193734' 00:21:34.144 killing process with pid 193734 00:21:34.144 02:41:07 -- common/autotest_common.sh@955 -- # kill 193734 00:21:34.144 [2024-04-27 02:41:07.736645] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:34.144 [2024-04-27 02:41:07.736672] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:34.144 02:41:07 -- common/autotest_common.sh@960 -- # wait 193734 00:21:34.406 02:41:07 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:34.406 02:41:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:34.406 02:41:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:34.406 02:41:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:34.406 02:41:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:34.406 02:41:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.406 02:41:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.406 02:41:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.334 02:41:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:36.334 00:21:36.334 real 0m10.920s 00:21:36.334 user 0m3.950s 00:21:36.334 sys 0m5.454s 00:21:36.334 02:41:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:36.334 02:41:09 -- common/autotest_common.sh@10 -- # set +x 00:21:36.334 ************************************ 00:21:36.334 END TEST nvmf_async_init 00:21:36.334 ************************************ 00:21:36.595 02:41:09 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:36.595 02:41:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:36.595 02:41:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.595 02:41:09 -- common/autotest_common.sh@10 -- # set +x 00:21:36.595 ************************************ 00:21:36.595 START TEST dma 00:21:36.595 ************************************ 00:21:36.595 02:41:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:36.857 * Looking for test storage... 00:21:36.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:36.857 02:41:10 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.857 02:41:10 -- nvmf/common.sh@7 -- # uname -s 00:21:36.857 02:41:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.857 02:41:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.857 02:41:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.857 02:41:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.857 02:41:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.857 02:41:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.857 02:41:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.857 02:41:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.857 02:41:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.857 02:41:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.857 02:41:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:36.857 02:41:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:36.857 02:41:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.857 02:41:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.857 02:41:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.857 02:41:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.857 02:41:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:36.857 02:41:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.857 02:41:10 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.857 02:41:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.857 02:41:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.857 02:41:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.857 02:41:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.857 02:41:10 -- paths/export.sh@5 -- # export PATH 00:21:36.857 02:41:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.857 02:41:10 -- nvmf/common.sh@47 -- # : 0 00:21:36.857 02:41:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:36.857 02:41:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:36.857 02:41:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.857 02:41:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.857 02:41:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.857 02:41:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:36.857 02:41:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:36.857 02:41:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:36.857 02:41:10 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:36.857 02:41:10 -- host/dma.sh@13 -- # exit 0 00:21:36.857 00:21:36.857 real 0m0.136s 00:21:36.857 user 0m0.059s 00:21:36.857 sys 0m0.086s 00:21:36.857 02:41:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:36.857 02:41:10 -- common/autotest_common.sh@10 -- # set +x 00:21:36.857 ************************************ 00:21:36.857 END TEST dma 00:21:36.857 
************************************ 00:21:36.857 02:41:10 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:36.857 02:41:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:36.857 02:41:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.857 02:41:10 -- common/autotest_common.sh@10 -- # set +x 00:21:37.119 ************************************ 00:21:37.119 START TEST nvmf_identify 00:21:37.119 ************************************ 00:21:37.119 02:41:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:37.119 * Looking for test storage... 00:21:37.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:37.119 02:41:10 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.119 02:41:10 -- nvmf/common.sh@7 -- # uname -s 00:21:37.119 02:41:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.119 02:41:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.119 02:41:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.119 02:41:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.119 02:41:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.119 02:41:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.119 02:41:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.119 02:41:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.119 02:41:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.119 02:41:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.119 02:41:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.119 02:41:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.119 02:41:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.119 02:41:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.119 02:41:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.119 02:41:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.119 02:41:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.119 02:41:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.119 02:41:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.119 02:41:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.119 02:41:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.119 02:41:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.119 02:41:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.119 02:41:10 -- paths/export.sh@5 -- # export PATH 00:21:37.119 02:41:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.119 02:41:10 -- nvmf/common.sh@47 -- # : 0 00:21:37.119 02:41:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.119 02:41:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.119 02:41:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.119 02:41:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.119 02:41:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.119 02:41:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.119 02:41:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.119 02:41:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.119 02:41:10 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:37.119 02:41:10 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:37.119 02:41:10 -- host/identify.sh@14 -- # nvmftestinit 00:21:37.119 02:41:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:37.119 02:41:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.119 02:41:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:37.119 02:41:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:37.119 02:41:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:37.119 02:41:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.119 02:41:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.119 02:41:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.119 02:41:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:37.119 02:41:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:37.119 02:41:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:37.119 02:41:10 -- common/autotest_common.sh@10 -- # set +x 00:21:43.710 02:41:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:21:43.710 02:41:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:43.710 02:41:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:43.710 02:41:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:43.710 02:41:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:43.710 02:41:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:43.710 02:41:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:43.710 02:41:17 -- nvmf/common.sh@295 -- # net_devs=() 00:21:43.710 02:41:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:43.710 02:41:17 -- nvmf/common.sh@296 -- # e810=() 00:21:43.710 02:41:17 -- nvmf/common.sh@296 -- # local -ga e810 00:21:43.710 02:41:17 -- nvmf/common.sh@297 -- # x722=() 00:21:43.710 02:41:17 -- nvmf/common.sh@297 -- # local -ga x722 00:21:43.710 02:41:17 -- nvmf/common.sh@298 -- # mlx=() 00:21:43.710 02:41:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:43.710 02:41:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.710 02:41:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.710 02:41:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.710 02:41:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.710 02:41:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.710 02:41:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.710 02:41:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.710 02:41:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.710 02:41:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.710 02:41:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.710 02:41:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.710 02:41:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:43.710 02:41:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:43.710 02:41:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:43.710 02:41:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.710 02:41:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:43.710 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:43.710 02:41:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.710 02:41:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:43.710 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:43.710 02:41:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
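This is the same NIC scan as in the earlier tests: the script walks its E810 (0x1592/0x159b), X722 (0x37d2) and Mellanox ID tables against a cached PCI bus listing and again matches the two 0x159b ports at 0000:4b:00.0/1. A rough equivalent with stock tools, assuming lspci is available (the script itself relies on its cached scan rather than lspci):

    lspci -d 8086:159b                              # Intel E810 ports, cf. "Found 0000:4b:00.0 (0x8086 - 0x159b)"
    ls /sys/bus/pci/devices/0000:4b:00.0/net/       # netdev bound to the port (cvl_0_0 in this log)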
00:21:43.710 02:41:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.710 02:41:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.710 02:41:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:43.710 02:41:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.710 02:41:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:43.710 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:43.710 02:41:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.710 02:41:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.710 02:41:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.710 02:41:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:43.710 02:41:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.710 02:41:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:43.710 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:43.710 02:41:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.710 02:41:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:43.710 02:41:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:43.710 02:41:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:43.710 02:41:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:43.710 02:41:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.711 02:41:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.711 02:41:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.711 02:41:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:43.711 02:41:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.711 02:41:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.711 02:41:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:43.711 02:41:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.711 02:41:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.711 02:41:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:43.711 02:41:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:43.711 02:41:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.711 02:41:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.711 02:41:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.972 02:41:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.972 02:41:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:43.972 02:41:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.972 02:41:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.972 02:41:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.972 02:41:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:43.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:43.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:21:43.972 00:21:43.972 --- 10.0.0.2 ping statistics --- 00:21:43.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.972 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:21:43.972 02:41:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:21:43.972 00:21:43.972 --- 10.0.0.1 ping statistics --- 00:21:43.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.972 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:21:43.972 02:41:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.972 02:41:17 -- nvmf/common.sh@411 -- # return 0 00:21:43.972 02:41:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:43.972 02:41:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.972 02:41:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:43.972 02:41:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:43.972 02:41:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.972 02:41:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:43.972 02:41:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:43.972 02:41:17 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:43.972 02:41:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:43.972 02:41:17 -- common/autotest_common.sh@10 -- # set +x 00:21:43.972 02:41:17 -- host/identify.sh@19 -- # nvmfpid=198451 00:21:43.972 02:41:17 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.972 02:41:17 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:43.972 02:41:17 -- host/identify.sh@23 -- # waitforlisten 198451 00:21:43.972 02:41:17 -- common/autotest_common.sh@817 -- # '[' -z 198451 ']' 00:21:43.972 02:41:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.972 02:41:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:43.972 02:41:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.972 02:41:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:43.972 02:41:17 -- common/autotest_common.sh@10 -- # set +x 00:21:44.233 [2024-04-27 02:41:17.593941] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:21:44.233 [2024-04-27 02:41:17.594005] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.233 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.233 [2024-04-27 02:41:17.680838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.233 [2024-04-27 02:41:17.752296] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.233 [2024-04-27 02:41:17.752329] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:44.233 [2024-04-27 02:41:17.752335] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.233 [2024-04-27 02:41:17.752339] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.233 [2024-04-27 02:41:17.752344] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.233 [2024-04-27 02:41:17.752461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.233 [2024-04-27 02:41:17.752602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.233 [2024-04-27 02:41:17.752628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:44.233 [2024-04-27 02:41:17.752628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.806 02:41:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:44.806 02:41:18 -- common/autotest_common.sh@850 -- # return 0 00:21:44.806 02:41:18 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:44.806 02:41:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.806 02:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:44.806 [2024-04-27 02:41:18.425836] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.070 02:41:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.070 02:41:18 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:45.070 02:41:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:45.070 02:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:45.070 02:41:18 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:45.070 02:41:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.070 02:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:45.070 Malloc0 00:21:45.070 02:41:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.070 02:41:18 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:45.070 02:41:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.070 02:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:45.070 02:41:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.070 02:41:18 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:45.070 02:41:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.070 02:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:45.070 02:41:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.070 02:41:18 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:45.070 02:41:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.070 02:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:45.070 [2024-04-27 02:41:18.525487] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.070 02:41:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.070 02:41:18 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:45.070 02:41:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.070 02:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:45.070 02:41:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.070 02:41:18 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:21:45.070 02:41:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.070 02:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:45.070 [2024-04-27 02:41:18.545309] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:45.070 [ 00:21:45.070 { 00:21:45.070 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:45.070 "subtype": "Discovery", 00:21:45.070 "listen_addresses": [ 00:21:45.070 { 00:21:45.070 "transport": "TCP", 00:21:45.070 "trtype": "TCP", 00:21:45.070 "adrfam": "IPv4", 00:21:45.070 "traddr": "10.0.0.2", 00:21:45.070 "trsvcid": "4420" 00:21:45.070 } 00:21:45.070 ], 00:21:45.070 "allow_any_host": true, 00:21:45.070 "hosts": [] 00:21:45.070 }, 00:21:45.070 { 00:21:45.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.070 "subtype": "NVMe", 00:21:45.070 "listen_addresses": [ 00:21:45.070 { 00:21:45.070 "transport": "TCP", 00:21:45.070 "trtype": "TCP", 00:21:45.070 "adrfam": "IPv4", 00:21:45.070 "traddr": "10.0.0.2", 00:21:45.070 "trsvcid": "4420" 00:21:45.070 } 00:21:45.070 ], 00:21:45.070 "allow_any_host": true, 00:21:45.070 "hosts": [], 00:21:45.070 "serial_number": "SPDK00000000000001", 00:21:45.070 "model_number": "SPDK bdev Controller", 00:21:45.070 "max_namespaces": 32, 00:21:45.070 "min_cntlid": 1, 00:21:45.070 "max_cntlid": 65519, 00:21:45.070 "namespaces": [ 00:21:45.070 { 00:21:45.070 "nsid": 1, 00:21:45.070 "bdev_name": "Malloc0", 00:21:45.070 "name": "Malloc0", 00:21:45.070 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:45.070 "eui64": "ABCDEF0123456789", 00:21:45.070 "uuid": "3afb3eeb-1f07-4b34-b839-75f0fb76acdf" 00:21:45.070 } 00:21:45.070 ] 00:21:45.070 } 00:21:45.070 ] 00:21:45.070 02:41:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.070 02:41:18 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:45.070 [2024-04-27 02:41:18.582662] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
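Before following the identify trace that starts here, note that the two subsystems in the nvmf_get_subsystems output above were created by the rpc_cmd calls a few lines earlier. Outside the test harness, the same configuration could be applied with scripts/rpc.py from the spdk checkout against the default /var/tmp/spdk.sock socket; the sequence below is a sketch of that equivalent, not an extra step of this run.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192          # same flags the harness passed above
  $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                              # should print the JSON shown above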
00:21:45.070 [2024-04-27 02:41:18.582703] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198609 ] 00:21:45.070 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.070 [2024-04-27 02:41:18.613967] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:45.070 [2024-04-27 02:41:18.614009] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:45.070 [2024-04-27 02:41:18.614014] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:45.070 [2024-04-27 02:41:18.614025] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:45.070 [2024-04-27 02:41:18.614033] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:45.070 [2024-04-27 02:41:18.617313] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:45.070 [2024-04-27 02:41:18.617345] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1521d10 0 00:21:45.070 [2024-04-27 02:41:18.617690] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:45.070 [2024-04-27 02:41:18.617701] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:45.070 [2024-04-27 02:41:18.617705] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:45.070 [2024-04-27 02:41:18.617709] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:45.070 [2024-04-27 02:41:18.617744] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.070 [2024-04-27 02:41:18.617750] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.070 [2024-04-27 02:41:18.617754] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1521d10) 00:21:45.070 [2024-04-27 02:41:18.617768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:45.070 [2024-04-27 02:41:18.617785] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589a60, cid 0, qid 0 00:21:45.070 [2024-04-27 02:41:18.625287] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.070 [2024-04-27 02:41:18.625297] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.070 [2024-04-27 02:41:18.625301] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.070 [2024-04-27 02:41:18.625305] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589a60) on tqpair=0x1521d10 00:21:45.070 [2024-04-27 02:41:18.625316] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:45.070 [2024-04-27 02:41:18.625322] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:45.070 [2024-04-27 02:41:18.625332] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:45.070 [2024-04-27 02:41:18.625345] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.070 [2024-04-27 02:41:18.625349] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:21:45.070 [2024-04-27 02:41:18.625352] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1521d10) 00:21:45.070 [2024-04-27 02:41:18.625360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.070 [2024-04-27 02:41:18.625374] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589a60, cid 0, qid 0 00:21:45.070 [2024-04-27 02:41:18.625627] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.070 [2024-04-27 02:41:18.625634] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.070 [2024-04-27 02:41:18.625638] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.625642] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589a60) on tqpair=0x1521d10 00:21:45.071 [2024-04-27 02:41:18.625648] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:45.071 [2024-04-27 02:41:18.625656] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:45.071 [2024-04-27 02:41:18.625663] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.625667] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.625670] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1521d10) 00:21:45.071 [2024-04-27 02:41:18.625677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.071 [2024-04-27 02:41:18.625689] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589a60, cid 0, qid 0 00:21:45.071 [2024-04-27 02:41:18.625867] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.071 [2024-04-27 02:41:18.625874] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.071 [2024-04-27 02:41:18.625878] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.625881] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589a60) on tqpair=0x1521d10 00:21:45.071 [2024-04-27 02:41:18.625887] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:45.071 [2024-04-27 02:41:18.625895] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:45.071 [2024-04-27 02:41:18.625902] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.625906] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.625909] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1521d10) 00:21:45.071 [2024-04-27 02:41:18.625916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.071 [2024-04-27 02:41:18.625927] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589a60, cid 0, qid 0 00:21:45.071 [2024-04-27 02:41:18.626220] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.071 [2024-04-27 
02:41:18.626226] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.071 [2024-04-27 02:41:18.626230] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.626233] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589a60) on tqpair=0x1521d10 00:21:45.071 [2024-04-27 02:41:18.626239] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:45.071 [2024-04-27 02:41:18.626252] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.626256] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.626259] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1521d10) 00:21:45.071 [2024-04-27 02:41:18.626266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.071 [2024-04-27 02:41:18.626284] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589a60, cid 0, qid 0 00:21:45.071 [2024-04-27 02:41:18.626518] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.071 [2024-04-27 02:41:18.626525] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.071 [2024-04-27 02:41:18.626529] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.626533] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589a60) on tqpair=0x1521d10 00:21:45.071 [2024-04-27 02:41:18.626538] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:45.071 [2024-04-27 02:41:18.626543] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:45.071 [2024-04-27 02:41:18.626551] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:45.071 [2024-04-27 02:41:18.626656] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:45.071 [2024-04-27 02:41:18.626661] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:45.071 [2024-04-27 02:41:18.626670] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.626673] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.626677] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1521d10) 00:21:45.071 [2024-04-27 02:41:18.626684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.071 [2024-04-27 02:41:18.626695] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589a60, cid 0, qid 0 00:21:45.071 [2024-04-27 02:41:18.626983] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.071 [2024-04-27 02:41:18.626989] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.071 [2024-04-27 02:41:18.626992] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.626996] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589a60) on tqpair=0x1521d10 00:21:45.071 [2024-04-27 02:41:18.627002] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:45.071 [2024-04-27 02:41:18.627011] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.627015] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.627019] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1521d10) 00:21:45.071 [2024-04-27 02:41:18.627026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.071 [2024-04-27 02:41:18.627036] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589a60, cid 0, qid 0 00:21:45.071 [2024-04-27 02:41:18.627301] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.071 [2024-04-27 02:41:18.627308] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.071 [2024-04-27 02:41:18.627312] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.627315] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589a60) on tqpair=0x1521d10 00:21:45.071 [2024-04-27 02:41:18.627324] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:45.071 [2024-04-27 02:41:18.627328] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:45.071 [2024-04-27 02:41:18.627336] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:45.071 [2024-04-27 02:41:18.627345] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:45.071 [2024-04-27 02:41:18.627355] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.627359] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1521d10) 00:21:45.071 [2024-04-27 02:41:18.627366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.071 [2024-04-27 02:41:18.627377] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589a60, cid 0, qid 0 00:21:45.071 [2024-04-27 02:41:18.627642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.071 [2024-04-27 02:41:18.627648] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.071 [2024-04-27 02:41:18.627652] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.627656] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1521d10): datao=0, datal=4096, cccid=0 00:21:45.071 [2024-04-27 02:41:18.627661] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1589a60) on tqpair(0x1521d10): expected_datao=0, payload_size=4096 00:21:45.071 [2024-04-27 02:41:18.627665] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.627739] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.627744] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.628037] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.071 [2024-04-27 02:41:18.628043] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.071 [2024-04-27 02:41:18.628047] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.628050] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589a60) on tqpair=0x1521d10 00:21:45.071 [2024-04-27 02:41:18.628059] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:45.071 [2024-04-27 02:41:18.628064] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:45.071 [2024-04-27 02:41:18.628069] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:45.071 [2024-04-27 02:41:18.628073] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:45.071 [2024-04-27 02:41:18.628078] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:45.071 [2024-04-27 02:41:18.628083] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:45.071 [2024-04-27 02:41:18.628092] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:45.071 [2024-04-27 02:41:18.628099] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.628102] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.071 [2024-04-27 02:41:18.628106] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1521d10) 00:21:45.071 [2024-04-27 02:41:18.628114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:45.071 [2024-04-27 02:41:18.628126] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589a60, cid 0, qid 0 00:21:45.071 [2024-04-27 02:41:18.628366] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.071 [2024-04-27 02:41:18.628373] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.071 [2024-04-27 02:41:18.628377] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.628381] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589a60) on tqpair=0x1521d10 00:21:45.072 [2024-04-27 02:41:18.628389] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.628393] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.628397] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1521d10) 00:21:45.072 [2024-04-27 02:41:18.628403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:21:45.072 [2024-04-27 02:41:18.628410] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.628413] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.628417] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1521d10) 00:21:45.072 [2024-04-27 02:41:18.628423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.072 [2024-04-27 02:41:18.628429] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.628433] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.628436] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1521d10) 00:21:45.072 [2024-04-27 02:41:18.628442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.072 [2024-04-27 02:41:18.628448] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.628452] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.628455] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1521d10) 00:21:45.072 [2024-04-27 02:41:18.628461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.072 [2024-04-27 02:41:18.628466] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:45.072 [2024-04-27 02:41:18.628478] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:45.072 [2024-04-27 02:41:18.628484] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.628488] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1521d10) 00:21:45.072 [2024-04-27 02:41:18.628495] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.072 [2024-04-27 02:41:18.628508] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589a60, cid 0, qid 0 00:21:45.072 [2024-04-27 02:41:18.628514] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589bc0, cid 1, qid 0 00:21:45.072 [2024-04-27 02:41:18.628518] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589d20, cid 2, qid 0 00:21:45.072 [2024-04-27 02:41:18.628523] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589e80, cid 3, qid 0 00:21:45.072 [2024-04-27 02:41:18.628528] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589fe0, cid 4, qid 0 00:21:45.072 [2024-04-27 02:41:18.628722] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.072 [2024-04-27 02:41:18.628729] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.072 [2024-04-27 02:41:18.628733] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.628736] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589fe0) on tqpair=0x1521d10 
00:21:45.072 [2024-04-27 02:41:18.628745] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:45.072 [2024-04-27 02:41:18.628751] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:45.072 [2024-04-27 02:41:18.628762] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.628766] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1521d10) 00:21:45.072 [2024-04-27 02:41:18.628773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.072 [2024-04-27 02:41:18.628784] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589fe0, cid 4, qid 0 00:21:45.072 [2024-04-27 02:41:18.628931] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.072 [2024-04-27 02:41:18.628938] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.072 [2024-04-27 02:41:18.628941] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.628945] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1521d10): datao=0, datal=4096, cccid=4 00:21:45.072 [2024-04-27 02:41:18.628950] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1589fe0) on tqpair(0x1521d10): expected_datao=0, payload_size=4096 00:21:45.072 [2024-04-27 02:41:18.628954] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.629119] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.629123] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.673286] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.072 [2024-04-27 02:41:18.673295] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.072 [2024-04-27 02:41:18.673299] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.673303] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589fe0) on tqpair=0x1521d10 00:21:45.072 [2024-04-27 02:41:18.673315] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:45.072 [2024-04-27 02:41:18.673333] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.673337] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1521d10) 00:21:45.072 [2024-04-27 02:41:18.673344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.072 [2024-04-27 02:41:18.673352] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.673355] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.673359] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1521d10) 00:21:45.072 [2024-04-27 02:41:18.673365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.072 [2024-04-27 02:41:18.673381] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589fe0, cid 4, qid 0 00:21:45.072 [2024-04-27 02:41:18.673387] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x158a140, cid 5, qid 0 00:21:45.072 [2024-04-27 02:41:18.673642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.072 [2024-04-27 02:41:18.673649] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.072 [2024-04-27 02:41:18.673652] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.673656] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1521d10): datao=0, datal=1024, cccid=4 00:21:45.072 [2024-04-27 02:41:18.673661] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1589fe0) on tqpair(0x1521d10): expected_datao=0, payload_size=1024 00:21:45.072 [2024-04-27 02:41:18.673665] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.673675] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.673679] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.673685] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.072 [2024-04-27 02:41:18.673690] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.072 [2024-04-27 02:41:18.673694] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.072 [2024-04-27 02:41:18.673697] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x158a140) on tqpair=0x1521d10 00:21:45.337 [2024-04-27 02:41:18.714649] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.337 [2024-04-27 02:41:18.714659] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.337 [2024-04-27 02:41:18.714662] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.337 [2024-04-27 02:41:18.714666] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589fe0) on tqpair=0x1521d10 00:21:45.337 [2024-04-27 02:41:18.714679] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.337 [2024-04-27 02:41:18.714682] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1521d10) 00:21:45.337 [2024-04-27 02:41:18.714689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.337 [2024-04-27 02:41:18.714705] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589fe0, cid 4, qid 0 00:21:45.337 [2024-04-27 02:41:18.714989] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.337 [2024-04-27 02:41:18.714997] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.338 [2024-04-27 02:41:18.715000] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.338 [2024-04-27 02:41:18.715004] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1521d10): datao=0, datal=3072, cccid=4 00:21:45.338 [2024-04-27 02:41:18.715008] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1589fe0) on tqpair(0x1521d10): expected_datao=0, payload_size=3072 00:21:45.338 [2024-04-27 02:41:18.715013] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.338 [2024-04-27 02:41:18.715019] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
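The GET LOG PAGE exchanges above are the identify tool pulling the discovery log page in pieces (a 1024-byte read of the header, a 3072-byte read of the whole log once the record count is known, and a final 8-byte re-read that looks like the usual generation-counter check) before it formats the report that follows. The same listing could in principle be fetched with the kernel initiator from inside the namespace; the nvme-cli call below is a hypothetical equivalent and is not executed by this job.

  # hypothetical: query the same discovery service with nvme-cli (not part of this run)
  ip netns exec cvl_0_0_ns_spdk nvme discover -t tcp -a 10.0.0.2 -s 4420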
00:21:45.338 [2024-04-27 02:41:18.715023] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.338 [2024-04-27 02:41:18.715128] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.338 [2024-04-27 02:41:18.715134] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.338 [2024-04-27 02:41:18.715137] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.338 [2024-04-27 02:41:18.715141] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589fe0) on tqpair=0x1521d10 00:21:45.338 [2024-04-27 02:41:18.715151] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.338 [2024-04-27 02:41:18.715155] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1521d10) 00:21:45.338 [2024-04-27 02:41:18.715162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.338 [2024-04-27 02:41:18.715176] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589fe0, cid 4, qid 0 00:21:45.338 [2024-04-27 02:41:18.715438] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.338 [2024-04-27 02:41:18.715445] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.338 [2024-04-27 02:41:18.715449] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.338 [2024-04-27 02:41:18.715452] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1521d10): datao=0, datal=8, cccid=4 00:21:45.338 [2024-04-27 02:41:18.715457] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1589fe0) on tqpair(0x1521d10): expected_datao=0, payload_size=8 00:21:45.338 [2024-04-27 02:41:18.715461] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.338 [2024-04-27 02:41:18.715471] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.338 [2024-04-27 02:41:18.715474] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.338 [2024-04-27 02:41:18.756482] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.338 [2024-04-27 02:41:18.756495] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.338 [2024-04-27 02:41:18.756499] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.338 [2024-04-27 02:41:18.756503] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589fe0) on tqpair=0x1521d10 00:21:45.338 ===================================================== 00:21:45.338 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:45.338 ===================================================== 00:21:45.338 Controller Capabilities/Features 00:21:45.338 ================================ 00:21:45.338 Vendor ID: 0000 00:21:45.338 Subsystem Vendor ID: 0000 00:21:45.338 Serial Number: .................... 00:21:45.338 Model Number: ........................................ 
00:21:45.338 Firmware Version: 24.05 00:21:45.338 Recommended Arb Burst: 0 00:21:45.338 IEEE OUI Identifier: 00 00 00 00:21:45.338 Multi-path I/O 00:21:45.338 May have multiple subsystem ports: No 00:21:45.338 May have multiple controllers: No 00:21:45.338 Associated with SR-IOV VF: No 00:21:45.338 Max Data Transfer Size: 131072 00:21:45.338 Max Number of Namespaces: 0 00:21:45.338 Max Number of I/O Queues: 1024 00:21:45.338 NVMe Specification Version (VS): 1.3 00:21:45.338 NVMe Specification Version (Identify): 1.3 00:21:45.338 Maximum Queue Entries: 128 00:21:45.338 Contiguous Queues Required: Yes 00:21:45.338 Arbitration Mechanisms Supported 00:21:45.338 Weighted Round Robin: Not Supported 00:21:45.338 Vendor Specific: Not Supported 00:21:45.338 Reset Timeout: 15000 ms 00:21:45.338 Doorbell Stride: 4 bytes 00:21:45.338 NVM Subsystem Reset: Not Supported 00:21:45.338 Command Sets Supported 00:21:45.338 NVM Command Set: Supported 00:21:45.338 Boot Partition: Not Supported 00:21:45.338 Memory Page Size Minimum: 4096 bytes 00:21:45.338 Memory Page Size Maximum: 4096 bytes 00:21:45.338 Persistent Memory Region: Not Supported 00:21:45.338 Optional Asynchronous Events Supported 00:21:45.338 Namespace Attribute Notices: Not Supported 00:21:45.338 Firmware Activation Notices: Not Supported 00:21:45.338 ANA Change Notices: Not Supported 00:21:45.338 PLE Aggregate Log Change Notices: Not Supported 00:21:45.338 LBA Status Info Alert Notices: Not Supported 00:21:45.338 EGE Aggregate Log Change Notices: Not Supported 00:21:45.338 Normal NVM Subsystem Shutdown event: Not Supported 00:21:45.338 Zone Descriptor Change Notices: Not Supported 00:21:45.338 Discovery Log Change Notices: Supported 00:21:45.338 Controller Attributes 00:21:45.338 128-bit Host Identifier: Not Supported 00:21:45.338 Non-Operational Permissive Mode: Not Supported 00:21:45.338 NVM Sets: Not Supported 00:21:45.338 Read Recovery Levels: Not Supported 00:21:45.338 Endurance Groups: Not Supported 00:21:45.338 Predictable Latency Mode: Not Supported 00:21:45.338 Traffic Based Keep ALive: Not Supported 00:21:45.338 Namespace Granularity: Not Supported 00:21:45.338 SQ Associations: Not Supported 00:21:45.338 UUID List: Not Supported 00:21:45.338 Multi-Domain Subsystem: Not Supported 00:21:45.338 Fixed Capacity Management: Not Supported 00:21:45.338 Variable Capacity Management: Not Supported 00:21:45.338 Delete Endurance Group: Not Supported 00:21:45.338 Delete NVM Set: Not Supported 00:21:45.338 Extended LBA Formats Supported: Not Supported 00:21:45.338 Flexible Data Placement Supported: Not Supported 00:21:45.338 00:21:45.338 Controller Memory Buffer Support 00:21:45.338 ================================ 00:21:45.338 Supported: No 00:21:45.338 00:21:45.338 Persistent Memory Region Support 00:21:45.338 ================================ 00:21:45.338 Supported: No 00:21:45.338 00:21:45.338 Admin Command Set Attributes 00:21:45.338 ============================ 00:21:45.338 Security Send/Receive: Not Supported 00:21:45.338 Format NVM: Not Supported 00:21:45.338 Firmware Activate/Download: Not Supported 00:21:45.338 Namespace Management: Not Supported 00:21:45.338 Device Self-Test: Not Supported 00:21:45.338 Directives: Not Supported 00:21:45.338 NVMe-MI: Not Supported 00:21:45.338 Virtualization Management: Not Supported 00:21:45.338 Doorbell Buffer Config: Not Supported 00:21:45.338 Get LBA Status Capability: Not Supported 00:21:45.338 Command & Feature Lockdown Capability: Not Supported 00:21:45.338 Abort Command Limit: 1 00:21:45.338 Async 
Event Request Limit: 4 00:21:45.338 Number of Firmware Slots: N/A 00:21:45.338 Firmware Slot 1 Read-Only: N/A 00:21:45.338 Firmware Activation Without Reset: N/A 00:21:45.338 Multiple Update Detection Support: N/A 00:21:45.338 Firmware Update Granularity: No Information Provided 00:21:45.338 Per-Namespace SMART Log: No 00:21:45.338 Asymmetric Namespace Access Log Page: Not Supported 00:21:45.338 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:45.338 Command Effects Log Page: Not Supported 00:21:45.338 Get Log Page Extended Data: Supported 00:21:45.338 Telemetry Log Pages: Not Supported 00:21:45.338 Persistent Event Log Pages: Not Supported 00:21:45.338 Supported Log Pages Log Page: May Support 00:21:45.338 Commands Supported & Effects Log Page: Not Supported 00:21:45.338 Feature Identifiers & Effects Log Page:May Support 00:21:45.338 NVMe-MI Commands & Effects Log Page: May Support 00:21:45.338 Data Area 4 for Telemetry Log: Not Supported 00:21:45.338 Error Log Page Entries Supported: 128 00:21:45.338 Keep Alive: Not Supported 00:21:45.338 00:21:45.338 NVM Command Set Attributes 00:21:45.338 ========================== 00:21:45.338 Submission Queue Entry Size 00:21:45.338 Max: 1 00:21:45.338 Min: 1 00:21:45.338 Completion Queue Entry Size 00:21:45.338 Max: 1 00:21:45.338 Min: 1 00:21:45.338 Number of Namespaces: 0 00:21:45.338 Compare Command: Not Supported 00:21:45.338 Write Uncorrectable Command: Not Supported 00:21:45.338 Dataset Management Command: Not Supported 00:21:45.338 Write Zeroes Command: Not Supported 00:21:45.338 Set Features Save Field: Not Supported 00:21:45.338 Reservations: Not Supported 00:21:45.338 Timestamp: Not Supported 00:21:45.338 Copy: Not Supported 00:21:45.338 Volatile Write Cache: Not Present 00:21:45.338 Atomic Write Unit (Normal): 1 00:21:45.338 Atomic Write Unit (PFail): 1 00:21:45.338 Atomic Compare & Write Unit: 1 00:21:45.338 Fused Compare & Write: Supported 00:21:45.338 Scatter-Gather List 00:21:45.338 SGL Command Set: Supported 00:21:45.338 SGL Keyed: Supported 00:21:45.338 SGL Bit Bucket Descriptor: Not Supported 00:21:45.338 SGL Metadata Pointer: Not Supported 00:21:45.338 Oversized SGL: Not Supported 00:21:45.338 SGL Metadata Address: Not Supported 00:21:45.338 SGL Offset: Supported 00:21:45.338 Transport SGL Data Block: Not Supported 00:21:45.338 Replay Protected Memory Block: Not Supported 00:21:45.338 00:21:45.338 Firmware Slot Information 00:21:45.338 ========================= 00:21:45.338 Active slot: 0 00:21:45.338 00:21:45.338 00:21:45.338 Error Log 00:21:45.338 ========= 00:21:45.338 00:21:45.338 Active Namespaces 00:21:45.338 ================= 00:21:45.338 Discovery Log Page 00:21:45.338 ================== 00:21:45.338 Generation Counter: 2 00:21:45.338 Number of Records: 2 00:21:45.339 Record Format: 0 00:21:45.339 00:21:45.339 Discovery Log Entry 0 00:21:45.339 ---------------------- 00:21:45.339 Transport Type: 3 (TCP) 00:21:45.339 Address Family: 1 (IPv4) 00:21:45.339 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:45.339 Entry Flags: 00:21:45.339 Duplicate Returned Information: 1 00:21:45.339 Explicit Persistent Connection Support for Discovery: 1 00:21:45.339 Transport Requirements: 00:21:45.339 Secure Channel: Not Required 00:21:45.339 Port ID: 0 (0x0000) 00:21:45.339 Controller ID: 65535 (0xffff) 00:21:45.339 Admin Max SQ Size: 128 00:21:45.339 Transport Service Identifier: 4420 00:21:45.339 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:45.339 Transport Address: 10.0.0.2 00:21:45.339 
Discovery Log Entry 1 00:21:45.339 ---------------------- 00:21:45.339 Transport Type: 3 (TCP) 00:21:45.339 Address Family: 1 (IPv4) 00:21:45.339 Subsystem Type: 2 (NVM Subsystem) 00:21:45.339 Entry Flags: 00:21:45.339 Duplicate Returned Information: 0 00:21:45.339 Explicit Persistent Connection Support for Discovery: 0 00:21:45.339 Transport Requirements: 00:21:45.339 Secure Channel: Not Required 00:21:45.339 Port ID: 0 (0x0000) 00:21:45.339 Controller ID: 65535 (0xffff) 00:21:45.339 Admin Max SQ Size: 128 00:21:45.339 Transport Service Identifier: 4420 00:21:45.339 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:45.339 Transport Address: 10.0.0.2 [2024-04-27 02:41:18.756592] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:45.339 [2024-04-27 02:41:18.756606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.339 [2024-04-27 02:41:18.756613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.339 [2024-04-27 02:41:18.756619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.339 [2024-04-27 02:41:18.756625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.339 [2024-04-27 02:41:18.756634] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.756637] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.756641] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1521d10) 00:21:45.339 [2024-04-27 02:41:18.756648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-04-27 02:41:18.756663] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589e80, cid 3, qid 0 00:21:45.339 [2024-04-27 02:41:18.756792] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.339 [2024-04-27 02:41:18.756799] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.339 [2024-04-27 02:41:18.756802] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.756806] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589e80) on tqpair=0x1521d10 00:21:45.339 [2024-04-27 02:41:18.756813] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.756817] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.756820] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1521d10) 00:21:45.339 [2024-04-27 02:41:18.756827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-04-27 02:41:18.756841] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589e80, cid 3, qid 0 00:21:45.339 [2024-04-27 02:41:18.757027] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.339 [2024-04-27 02:41:18.757033] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.339 [2024-04-27 02:41:18.757036] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.757040] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589e80) on tqpair=0x1521d10 00:21:45.339 [2024-04-27 02:41:18.757046] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:45.339 [2024-04-27 02:41:18.757050] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:45.339 [2024-04-27 02:41:18.757060] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.757063] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.757067] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1521d10) 00:21:45.339 [2024-04-27 02:41:18.757074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-04-27 02:41:18.757087] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589e80, cid 3, qid 0 00:21:45.339 [2024-04-27 02:41:18.757272] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.339 [2024-04-27 02:41:18.757847] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.339 [2024-04-27 02:41:18.757854] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.757858] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589e80) on tqpair=0x1521d10 00:21:45.339 [2024-04-27 02:41:18.761291] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.761297] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.761301] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1521d10) 00:21:45.339 [2024-04-27 02:41:18.761308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-04-27 02:41:18.761322] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1589e80, cid 3, qid 0 00:21:45.339 [2024-04-27 02:41:18.761609] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.339 [2024-04-27 02:41:18.761616] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.339 [2024-04-27 02:41:18.761620] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.761623] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1589e80) on tqpair=0x1521d10 00:21:45.339 [2024-04-27 02:41:18.761632] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:21:45.339 00:21:45.339 02:41:18 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:45.339 [2024-04-27 02:41:18.802163] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:21:45.339 [2024-04-27 02:41:18.802213] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198711 ] 00:21:45.339 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.339 [2024-04-27 02:41:18.841358] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:45.339 [2024-04-27 02:41:18.841402] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:45.339 [2024-04-27 02:41:18.841407] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:45.339 [2024-04-27 02:41:18.841418] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:45.339 [2024-04-27 02:41:18.841425] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:45.339 [2024-04-27 02:41:18.841957] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:45.339 [2024-04-27 02:41:18.841985] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19d3d10 0 00:21:45.339 [2024-04-27 02:41:18.848288] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:45.339 [2024-04-27 02:41:18.848303] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:45.339 [2024-04-27 02:41:18.848307] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:45.339 [2024-04-27 02:41:18.848310] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:45.339 [2024-04-27 02:41:18.848343] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.848348] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.848355] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d3d10) 00:21:45.339 [2024-04-27 02:41:18.848368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:45.339 [2024-04-27 02:41:18.848385] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3ba60, cid 0, qid 0 00:21:45.339 [2024-04-27 02:41:18.856289] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.339 [2024-04-27 02:41:18.856297] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.339 [2024-04-27 02:41:18.856301] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.856305] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3ba60) on tqpair=0x19d3d10 00:21:45.339 [2024-04-27 02:41:18.856317] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:45.339 [2024-04-27 02:41:18.856324] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:45.339 [2024-04-27 02:41:18.856329] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:45.339 [2024-04-27 02:41:18.856341] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.339 [2024-04-27 02:41:18.856345] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.339 [2024-04-27 
02:41:18.856348] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d3d10) 00:21:45.339 [2024-04-27 02:41:18.856356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-04-27 02:41:18.856368] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3ba60, cid 0, qid 0 00:21:45.340 [2024-04-27 02:41:18.856624] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.340 [2024-04-27 02:41:18.856632] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.340 [2024-04-27 02:41:18.856635] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.856639] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3ba60) on tqpair=0x19d3d10 00:21:45.340 [2024-04-27 02:41:18.856645] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:45.340 [2024-04-27 02:41:18.856653] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:45.340 [2024-04-27 02:41:18.856660] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.856664] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.856667] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d3d10) 00:21:45.340 [2024-04-27 02:41:18.856675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.340 [2024-04-27 02:41:18.856686] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3ba60, cid 0, qid 0 00:21:45.340 [2024-04-27 02:41:18.857021] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.340 [2024-04-27 02:41:18.857027] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.340 [2024-04-27 02:41:18.857030] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.857034] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3ba60) on tqpair=0x19d3d10 00:21:45.340 [2024-04-27 02:41:18.857039] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:45.340 [2024-04-27 02:41:18.857048] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:45.340 [2024-04-27 02:41:18.857055] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.857058] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.857062] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d3d10) 00:21:45.340 [2024-04-27 02:41:18.857071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.340 [2024-04-27 02:41:18.857081] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3ba60, cid 0, qid 0 00:21:45.340 [2024-04-27 02:41:18.857298] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.340 [2024-04-27 02:41:18.857306] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
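This second spdk_nvme_identify pass repeats the same admin-queue bring-up, now against the NVM subsystem nqn.2016-06.io.spdk:cnode1 instead of the discovery subsystem. Outside the SPDK example binary, the same target could be exercised with the kernel initiator; the nvme-cli sequence below is a hypothetical equivalent, not something this job runs, and the /dev/nvme0 name is illustrative since it depends on enumeration.

  # hypothetical kernel-initiator equivalent (not executed here)
  ip netns exec cvl_0_0_ns_spdk nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  ip netns exec cvl_0_0_ns_spdk nvme id-ctrl /dev/nvme0      # device name depends on enumeration
  ip netns exec cvl_0_0_ns_spdk nvme disconnect -n nqn.2016-06.io.spdk:cnode1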
00:21:45.340 [2024-04-27 02:41:18.857309] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.857313] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3ba60) on tqpair=0x19d3d10 00:21:45.340 [2024-04-27 02:41:18.857319] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:45.340 [2024-04-27 02:41:18.857328] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.857332] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.857335] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d3d10) 00:21:45.340 [2024-04-27 02:41:18.857342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.340 [2024-04-27 02:41:18.857353] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3ba60, cid 0, qid 0 00:21:45.340 [2024-04-27 02:41:18.857569] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.340 [2024-04-27 02:41:18.857575] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.340 [2024-04-27 02:41:18.857578] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.857582] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3ba60) on tqpair=0x19d3d10 00:21:45.340 [2024-04-27 02:41:18.857587] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:45.340 [2024-04-27 02:41:18.857592] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:45.340 [2024-04-27 02:41:18.857600] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:45.340 [2024-04-27 02:41:18.857705] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:45.340 [2024-04-27 02:41:18.857709] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:45.340 [2024-04-27 02:41:18.857717] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.857720] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.857724] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d3d10) 00:21:45.340 [2024-04-27 02:41:18.857731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.340 [2024-04-27 02:41:18.857741] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3ba60, cid 0, qid 0 00:21:45.340 [2024-04-27 02:41:18.857977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.340 [2024-04-27 02:41:18.857983] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.340 [2024-04-27 02:41:18.857987] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.857990] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3ba60) on 
tqpair=0x19d3d10 00:21:45.340 [2024-04-27 02:41:18.857996] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:45.340 [2024-04-27 02:41:18.858006] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.858010] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.858016] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d3d10) 00:21:45.340 [2024-04-27 02:41:18.858023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.340 [2024-04-27 02:41:18.858034] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3ba60, cid 0, qid 0 00:21:45.340 [2024-04-27 02:41:18.858380] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.340 [2024-04-27 02:41:18.858386] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.340 [2024-04-27 02:41:18.858390] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.858393] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3ba60) on tqpair=0x19d3d10 00:21:45.340 [2024-04-27 02:41:18.858398] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:45.340 [2024-04-27 02:41:18.858403] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:45.340 [2024-04-27 02:41:18.858411] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:45.340 [2024-04-27 02:41:18.858419] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:45.340 [2024-04-27 02:41:18.858429] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.858434] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d3d10) 00:21:45.340 [2024-04-27 02:41:18.858441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.340 [2024-04-27 02:41:18.858451] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3ba60, cid 0, qid 0 00:21:45.340 [2024-04-27 02:41:18.858781] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.340 [2024-04-27 02:41:18.858789] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.340 [2024-04-27 02:41:18.858792] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.858796] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d3d10): datao=0, datal=4096, cccid=0 00:21:45.340 [2024-04-27 02:41:18.858801] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a3ba60) on tqpair(0x19d3d10): expected_datao=0, payload_size=4096 00:21:45.340 [2024-04-27 02:41:18.858805] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.858813] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.858816] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.899521] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.340 [2024-04-27 02:41:18.899533] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.340 [2024-04-27 02:41:18.899536] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.899540] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3ba60) on tqpair=0x19d3d10 00:21:45.340 [2024-04-27 02:41:18.899549] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:45.340 [2024-04-27 02:41:18.899554] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:45.340 [2024-04-27 02:41:18.899558] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:45.340 [2024-04-27 02:41:18.899562] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:45.340 [2024-04-27 02:41:18.899567] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:45.340 [2024-04-27 02:41:18.899571] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:45.340 [2024-04-27 02:41:18.899583] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:45.340 [2024-04-27 02:41:18.899590] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.899594] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.899598] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d3d10) 00:21:45.340 [2024-04-27 02:41:18.899606] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:45.340 [2024-04-27 02:41:18.899618] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3ba60, cid 0, qid 0 00:21:45.340 [2024-04-27 02:41:18.899931] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.340 [2024-04-27 02:41:18.899937] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.340 [2024-04-27 02:41:18.899940] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.899944] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3ba60) on tqpair=0x19d3d10 00:21:45.340 [2024-04-27 02:41:18.899951] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.899955] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.340 [2024-04-27 02:41:18.899958] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19d3d10) 00:21:45.340 [2024-04-27 02:41:18.899964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.341 [2024-04-27 02:41:18.899971] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.899974] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.899978] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19d3d10) 00:21:45.341 [2024-04-27 02:41:18.899983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.341 [2024-04-27 02:41:18.899989] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.899993] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.899996] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19d3d10) 00:21:45.341 [2024-04-27 02:41:18.900002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.341 [2024-04-27 02:41:18.900008] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.900011] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.900015] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.341 [2024-04-27 02:41:18.900020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.341 [2024-04-27 02:41:18.900025] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.900036] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.900043] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.900046] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19d3d10) 00:21:45.341 [2024-04-27 02:41:18.900053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.341 [2024-04-27 02:41:18.900064] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3ba60, cid 0, qid 0 00:21:45.341 [2024-04-27 02:41:18.900069] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3bbc0, cid 1, qid 0 00:21:45.341 [2024-04-27 02:41:18.900076] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3bd20, cid 2, qid 0 00:21:45.341 [2024-04-27 02:41:18.900080] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.341 [2024-04-27 02:41:18.900085] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3bfe0, cid 4, qid 0 00:21:45.341 [2024-04-27 02:41:18.904285] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.341 [2024-04-27 02:41:18.904296] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.341 [2024-04-27 02:41:18.904299] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.904303] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3bfe0) on tqpair=0x19d3d10 00:21:45.341 [2024-04-27 02:41:18.904309] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:45.341 [2024-04-27 02:41:18.904314] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.904326] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.904332] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.904339] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.904343] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.904346] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19d3d10) 00:21:45.341 [2024-04-27 02:41:18.904353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:45.341 [2024-04-27 02:41:18.904367] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3bfe0, cid 4, qid 0 00:21:45.341 [2024-04-27 02:41:18.904615] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.341 [2024-04-27 02:41:18.904622] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.341 [2024-04-27 02:41:18.904625] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.904629] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3bfe0) on tqpair=0x19d3d10 00:21:45.341 [2024-04-27 02:41:18.904679] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.904689] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.904696] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.904700] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19d3d10) 00:21:45.341 [2024-04-27 02:41:18.904707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.341 [2024-04-27 02:41:18.904718] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3bfe0, cid 4, qid 0 00:21:45.341 [2024-04-27 02:41:18.904953] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.341 [2024-04-27 02:41:18.904960] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.341 [2024-04-27 02:41:18.904964] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.904967] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d3d10): datao=0, datal=4096, cccid=4 00:21:45.341 [2024-04-27 02:41:18.904972] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a3bfe0) on tqpair(0x19d3d10): expected_datao=0, payload_size=4096 00:21:45.341 [2024-04-27 02:41:18.904976] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.904983] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.904989] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905171] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.341 [2024-04-27 02:41:18.905178] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.341 [2024-04-27 02:41:18.905181] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905185] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3bfe0) on tqpair=0x19d3d10 00:21:45.341 [2024-04-27 02:41:18.905194] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:45.341 [2024-04-27 02:41:18.905208] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.905217] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.905224] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905227] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19d3d10) 00:21:45.341 [2024-04-27 02:41:18.905234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.341 [2024-04-27 02:41:18.905246] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3bfe0, cid 4, qid 0 00:21:45.341 [2024-04-27 02:41:18.905377] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.341 [2024-04-27 02:41:18.905385] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.341 [2024-04-27 02:41:18.905388] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905392] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d3d10): datao=0, datal=4096, cccid=4 00:21:45.341 [2024-04-27 02:41:18.905396] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a3bfe0) on tqpair(0x19d3d10): expected_datao=0, payload_size=4096 00:21:45.341 [2024-04-27 02:41:18.905401] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905407] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905411] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905553] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.341 [2024-04-27 02:41:18.905559] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.341 [2024-04-27 02:41:18.905562] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905566] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3bfe0) on tqpair=0x19d3d10 00:21:45.341 [2024-04-27 02:41:18.905580] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.905589] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.905596] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905600] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x19d3d10) 00:21:45.341 [2024-04-27 02:41:18.905607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.341 [2024-04-27 02:41:18.905619] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3bfe0, cid 4, qid 0 00:21:45.341 [2024-04-27 02:41:18.905741] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.341 [2024-04-27 02:41:18.905748] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.341 [2024-04-27 02:41:18.905751] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905755] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d3d10): datao=0, datal=4096, cccid=4 00:21:45.341 [2024-04-27 02:41:18.905764] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a3bfe0) on tqpair(0x19d3d10): expected_datao=0, payload_size=4096 00:21:45.341 [2024-04-27 02:41:18.905769] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905775] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905779] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905924] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.341 [2024-04-27 02:41:18.905931] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.341 [2024-04-27 02:41:18.905934] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.341 [2024-04-27 02:41:18.905938] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3bfe0) on tqpair=0x19d3d10 00:21:45.341 [2024-04-27 02:41:18.905947] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.905955] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.905964] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.905970] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:45.341 [2024-04-27 02:41:18.905975] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:45.342 [2024-04-27 02:41:18.905980] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:45.342 [2024-04-27 02:41:18.905985] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:45.342 [2024-04-27 02:41:18.905990] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:45.342 [2024-04-27 02:41:18.906004] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.906007] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19d3d10) 00:21:45.342 [2024-04-27 02:41:18.906014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.342 [2024-04-27 02:41:18.906021] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.906024] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.906028] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19d3d10) 00:21:45.342 [2024-04-27 02:41:18.906034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.342 [2024-04-27 02:41:18.906048] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3bfe0, cid 4, qid 0 00:21:45.342 [2024-04-27 02:41:18.906053] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3c140, cid 5, qid 0 00:21:45.342 [2024-04-27 02:41:18.906304] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.342 [2024-04-27 02:41:18.906311] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.342 [2024-04-27 02:41:18.906315] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.906318] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3bfe0) on tqpair=0x19d3d10 00:21:45.342 [2024-04-27 02:41:18.906326] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.342 [2024-04-27 02:41:18.906332] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.342 [2024-04-27 02:41:18.906335] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.906339] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3c140) on tqpair=0x19d3d10 00:21:45.342 [2024-04-27 02:41:18.906352] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.906356] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19d3d10) 00:21:45.342 [2024-04-27 02:41:18.906362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.342 [2024-04-27 02:41:18.906373] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3c140, cid 5, qid 0 00:21:45.342 [2024-04-27 02:41:18.906611] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.342 [2024-04-27 02:41:18.906617] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.342 [2024-04-27 02:41:18.906621] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.906624] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3c140) on tqpair=0x19d3d10 00:21:45.342 [2024-04-27 02:41:18.906634] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.906638] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19d3d10) 00:21:45.342 [2024-04-27 02:41:18.906644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.342 [2024-04-27 02:41:18.906654] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3c140, cid 5, qid 0 00:21:45.342 [2024-04-27 02:41:18.906866] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.342 [2024-04-27 02:41:18.906872] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.342 [2024-04-27 02:41:18.906876] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.906879] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3c140) on tqpair=0x19d3d10 00:21:45.342 [2024-04-27 02:41:18.906889] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.906893] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19d3d10) 00:21:45.342 [2024-04-27 02:41:18.906899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.342 [2024-04-27 02:41:18.906909] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3c140, cid 5, qid 0 00:21:45.342 [2024-04-27 02:41:18.907105] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.342 [2024-04-27 02:41:18.907111] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.342 [2024-04-27 02:41:18.907115] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.907118] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3c140) on tqpair=0x19d3d10 00:21:45.342 [2024-04-27 02:41:18.907130] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.907134] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19d3d10) 00:21:45.342 [2024-04-27 02:41:18.907141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.342 [2024-04-27 02:41:18.907148] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.907151] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19d3d10) 00:21:45.342 [2024-04-27 02:41:18.907158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.342 [2024-04-27 02:41:18.907165] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.907168] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19d3d10) 00:21:45.342 [2024-04-27 02:41:18.907174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.342 [2024-04-27 02:41:18.907182] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.907188] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19d3d10) 00:21:45.342 [2024-04-27 02:41:18.907194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.342 [2024-04-27 02:41:18.907205] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3c140, cid 5, qid 0 00:21:45.342 [2024-04-27 02:41:18.907210] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3bfe0, cid 4, qid 0 00:21:45.342 [2024-04-27 02:41:18.907215] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1a3c2a0, cid 6, qid 0 00:21:45.342 [2024-04-27 02:41:18.907220] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3c400, cid 7, qid 0 00:21:45.342 [2024-04-27 02:41:18.911287] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.342 [2024-04-27 02:41:18.911295] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.342 [2024-04-27 02:41:18.911298] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.342 [2024-04-27 02:41:18.911302] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d3d10): datao=0, datal=8192, cccid=5 00:21:45.342 [2024-04-27 02:41:18.911306] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a3c140) on tqpair(0x19d3d10): expected_datao=0, payload_size=8192 00:21:45.343 [2024-04-27 02:41:18.911310] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911317] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911320] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911326] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.343 [2024-04-27 02:41:18.911332] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.343 [2024-04-27 02:41:18.911335] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911338] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d3d10): datao=0, datal=512, cccid=4 00:21:45.343 [2024-04-27 02:41:18.911343] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a3bfe0) on tqpair(0x19d3d10): expected_datao=0, payload_size=512 00:21:45.343 [2024-04-27 02:41:18.911347] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911353] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911356] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911362] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.343 [2024-04-27 02:41:18.911368] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.343 [2024-04-27 02:41:18.911371] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911374] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d3d10): datao=0, datal=512, cccid=6 00:21:45.343 [2024-04-27 02:41:18.911378] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a3c2a0) on tqpair(0x19d3d10): expected_datao=0, payload_size=512 00:21:45.343 [2024-04-27 02:41:18.911383] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911389] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911392] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911398] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.343 [2024-04-27 02:41:18.911403] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.343 [2024-04-27 02:41:18.911407] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911410] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19d3d10): datao=0, datal=4096, cccid=7 
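Up to this point the trace shows two PDU types arriving from the target: pdu type = 5, handled by nvme_tcp_capsule_resp_hdr_handle (the capsule response that completes a command), and pdu type = 7, handled by nvme_tcp_c2h_data_hdr_handle (controller-to-host data carrying the Identify and Get Log Page payloads). The C2HData header advertises datao (offset), datal (length) and cccid (the owning command id), and the host checks that the chunk fits the buffer it posted, which is why each entry also prints expected_datao and payload_size. A small illustrative check, not SPDK code, with the values taken from the lines above; the function name is made up:

# c2h_data_check.py - the consistency check implied by the c2h_data entries above
# (illustration only; values come from the trace, the helper itself is hypothetical)
def c2h_data_fits(datao: int, datal: int, payload_size: int) -> bool:
    """True when the C2HData chunk [datao, datao + datal) stays inside the posted buffer."""
    return datao >= 0 and datal > 0 and datao + datal <= payload_size

# From the trace: 4 KiB Identify reads (datao=0, datal=4096, payload_size=4096),
# 512-byte slices of the batched Get Log Page (datao=0, datal=512, payload_size=512),
# and the 8 KiB log page for cccid=5 (datao=0, datal=8192, payload_size=8192).
assert c2h_data_fits(0, 4096, 4096)
assert c2h_data_fits(0, 512, 512)
assert c2h_data_fits(0, 8192, 8192)
print("C2HData offsets and lengths in the trace fit the posted buffers")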
00:21:45.343 [2024-04-27 02:41:18.911414] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a3c400) on tqpair(0x19d3d10): expected_datao=0, payload_size=4096 00:21:45.343 [2024-04-27 02:41:18.911418] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911427] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911430] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911436] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.343 [2024-04-27 02:41:18.911442] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.343 [2024-04-27 02:41:18.911445] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911449] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3c140) on tqpair=0x19d3d10 00:21:45.343 [2024-04-27 02:41:18.911462] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.343 [2024-04-27 02:41:18.911468] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.343 [2024-04-27 02:41:18.911471] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911475] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3bfe0) on tqpair=0x19d3d10 00:21:45.343 [2024-04-27 02:41:18.911484] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.343 [2024-04-27 02:41:18.911490] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.343 [2024-04-27 02:41:18.911493] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911497] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3c2a0) on tqpair=0x19d3d10 00:21:45.343 [2024-04-27 02:41:18.911504] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.343 [2024-04-27 02:41:18.911510] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.343 [2024-04-27 02:41:18.911514] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.343 [2024-04-27 02:41:18.911517] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3c400) on tqpair=0x19d3d10 00:21:45.343 ===================================================== 00:21:45.343 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:45.343 ===================================================== 00:21:45.343 Controller Capabilities/Features 00:21:45.343 ================================ 00:21:45.343 Vendor ID: 8086 00:21:45.343 Subsystem Vendor ID: 8086 00:21:45.343 Serial Number: SPDK00000000000001 00:21:45.343 Model Number: SPDK bdev Controller 00:21:45.343 Firmware Version: 24.05 00:21:45.343 Recommended Arb Burst: 6 00:21:45.343 IEEE OUI Identifier: e4 d2 5c 00:21:45.343 Multi-path I/O 00:21:45.343 May have multiple subsystem ports: Yes 00:21:45.343 May have multiple controllers: Yes 00:21:45.343 Associated with SR-IOV VF: No 00:21:45.343 Max Data Transfer Size: 131072 00:21:45.343 Max Number of Namespaces: 32 00:21:45.343 Max Number of I/O Queues: 127 00:21:45.343 NVMe Specification Version (VS): 1.3 00:21:45.343 NVMe Specification Version (Identify): 1.3 00:21:45.343 Maximum Queue Entries: 128 00:21:45.343 Contiguous Queues Required: Yes 00:21:45.343 Arbitration Mechanisms Supported 00:21:45.343 Weighted Round Robin: Not Supported 00:21:45.343 Vendor 
Specific: Not Supported 00:21:45.343 Reset Timeout: 15000 ms 00:21:45.343 Doorbell Stride: 4 bytes 00:21:45.343 NVM Subsystem Reset: Not Supported 00:21:45.343 Command Sets Supported 00:21:45.343 NVM Command Set: Supported 00:21:45.343 Boot Partition: Not Supported 00:21:45.343 Memory Page Size Minimum: 4096 bytes 00:21:45.343 Memory Page Size Maximum: 4096 bytes 00:21:45.343 Persistent Memory Region: Not Supported 00:21:45.343 Optional Asynchronous Events Supported 00:21:45.343 Namespace Attribute Notices: Supported 00:21:45.343 Firmware Activation Notices: Not Supported 00:21:45.343 ANA Change Notices: Not Supported 00:21:45.343 PLE Aggregate Log Change Notices: Not Supported 00:21:45.343 LBA Status Info Alert Notices: Not Supported 00:21:45.343 EGE Aggregate Log Change Notices: Not Supported 00:21:45.343 Normal NVM Subsystem Shutdown event: Not Supported 00:21:45.343 Zone Descriptor Change Notices: Not Supported 00:21:45.343 Discovery Log Change Notices: Not Supported 00:21:45.343 Controller Attributes 00:21:45.343 128-bit Host Identifier: Supported 00:21:45.343 Non-Operational Permissive Mode: Not Supported 00:21:45.343 NVM Sets: Not Supported 00:21:45.343 Read Recovery Levels: Not Supported 00:21:45.343 Endurance Groups: Not Supported 00:21:45.343 Predictable Latency Mode: Not Supported 00:21:45.343 Traffic Based Keep ALive: Not Supported 00:21:45.343 Namespace Granularity: Not Supported 00:21:45.343 SQ Associations: Not Supported 00:21:45.343 UUID List: Not Supported 00:21:45.343 Multi-Domain Subsystem: Not Supported 00:21:45.343 Fixed Capacity Management: Not Supported 00:21:45.343 Variable Capacity Management: Not Supported 00:21:45.343 Delete Endurance Group: Not Supported 00:21:45.343 Delete NVM Set: Not Supported 00:21:45.343 Extended LBA Formats Supported: Not Supported 00:21:45.343 Flexible Data Placement Supported: Not Supported 00:21:45.343 00:21:45.343 Controller Memory Buffer Support 00:21:45.343 ================================ 00:21:45.343 Supported: No 00:21:45.343 00:21:45.343 Persistent Memory Region Support 00:21:45.343 ================================ 00:21:45.343 Supported: No 00:21:45.343 00:21:45.343 Admin Command Set Attributes 00:21:45.343 ============================ 00:21:45.343 Security Send/Receive: Not Supported 00:21:45.343 Format NVM: Not Supported 00:21:45.343 Firmware Activate/Download: Not Supported 00:21:45.343 Namespace Management: Not Supported 00:21:45.343 Device Self-Test: Not Supported 00:21:45.343 Directives: Not Supported 00:21:45.343 NVMe-MI: Not Supported 00:21:45.343 Virtualization Management: Not Supported 00:21:45.343 Doorbell Buffer Config: Not Supported 00:21:45.343 Get LBA Status Capability: Not Supported 00:21:45.343 Command & Feature Lockdown Capability: Not Supported 00:21:45.343 Abort Command Limit: 4 00:21:45.343 Async Event Request Limit: 4 00:21:45.343 Number of Firmware Slots: N/A 00:21:45.343 Firmware Slot 1 Read-Only: N/A 00:21:45.343 Firmware Activation Without Reset: N/A 00:21:45.343 Multiple Update Detection Support: N/A 00:21:45.343 Firmware Update Granularity: No Information Provided 00:21:45.343 Per-Namespace SMART Log: No 00:21:45.343 Asymmetric Namespace Access Log Page: Not Supported 00:21:45.343 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:45.343 Command Effects Log Page: Supported 00:21:45.343 Get Log Page Extended Data: Supported 00:21:45.343 Telemetry Log Pages: Not Supported 00:21:45.343 Persistent Event Log Pages: Not Supported 00:21:45.343 Supported Log Pages Log Page: May Support 00:21:45.343 Commands 
Supported & Effects Log Page: Not Supported 00:21:45.343 Feature Identifiers & Effects Log Page:May Support 00:21:45.343 NVMe-MI Commands & Effects Log Page: May Support 00:21:45.343 Data Area 4 for Telemetry Log: Not Supported 00:21:45.343 Error Log Page Entries Supported: 128 00:21:45.343 Keep Alive: Supported 00:21:45.343 Keep Alive Granularity: 10000 ms 00:21:45.343 00:21:45.343 NVM Command Set Attributes 00:21:45.343 ========================== 00:21:45.343 Submission Queue Entry Size 00:21:45.343 Max: 64 00:21:45.343 Min: 64 00:21:45.343 Completion Queue Entry Size 00:21:45.343 Max: 16 00:21:45.343 Min: 16 00:21:45.343 Number of Namespaces: 32 00:21:45.343 Compare Command: Supported 00:21:45.343 Write Uncorrectable Command: Not Supported 00:21:45.343 Dataset Management Command: Supported 00:21:45.343 Write Zeroes Command: Supported 00:21:45.344 Set Features Save Field: Not Supported 00:21:45.344 Reservations: Supported 00:21:45.344 Timestamp: Not Supported 00:21:45.344 Copy: Supported 00:21:45.344 Volatile Write Cache: Present 00:21:45.344 Atomic Write Unit (Normal): 1 00:21:45.344 Atomic Write Unit (PFail): 1 00:21:45.344 Atomic Compare & Write Unit: 1 00:21:45.344 Fused Compare & Write: Supported 00:21:45.344 Scatter-Gather List 00:21:45.344 SGL Command Set: Supported 00:21:45.344 SGL Keyed: Supported 00:21:45.344 SGL Bit Bucket Descriptor: Not Supported 00:21:45.344 SGL Metadata Pointer: Not Supported 00:21:45.344 Oversized SGL: Not Supported 00:21:45.344 SGL Metadata Address: Not Supported 00:21:45.344 SGL Offset: Supported 00:21:45.344 Transport SGL Data Block: Not Supported 00:21:45.344 Replay Protected Memory Block: Not Supported 00:21:45.344 00:21:45.344 Firmware Slot Information 00:21:45.344 ========================= 00:21:45.344 Active slot: 1 00:21:45.344 Slot 1 Firmware Revision: 24.05 00:21:45.344 00:21:45.344 00:21:45.344 Commands Supported and Effects 00:21:45.344 ============================== 00:21:45.344 Admin Commands 00:21:45.344 -------------- 00:21:45.344 Get Log Page (02h): Supported 00:21:45.344 Identify (06h): Supported 00:21:45.344 Abort (08h): Supported 00:21:45.344 Set Features (09h): Supported 00:21:45.344 Get Features (0Ah): Supported 00:21:45.344 Asynchronous Event Request (0Ch): Supported 00:21:45.344 Keep Alive (18h): Supported 00:21:45.344 I/O Commands 00:21:45.344 ------------ 00:21:45.344 Flush (00h): Supported LBA-Change 00:21:45.344 Write (01h): Supported LBA-Change 00:21:45.344 Read (02h): Supported 00:21:45.344 Compare (05h): Supported 00:21:45.344 Write Zeroes (08h): Supported LBA-Change 00:21:45.344 Dataset Management (09h): Supported LBA-Change 00:21:45.344 Copy (19h): Supported LBA-Change 00:21:45.344 Unknown (79h): Supported LBA-Change 00:21:45.344 Unknown (7Ah): Supported 00:21:45.344 00:21:45.344 Error Log 00:21:45.344 ========= 00:21:45.344 00:21:45.344 Arbitration 00:21:45.344 =========== 00:21:45.344 Arbitration Burst: 1 00:21:45.344 00:21:45.344 Power Management 00:21:45.344 ================ 00:21:45.344 Number of Power States: 1 00:21:45.344 Current Power State: Power State #0 00:21:45.344 Power State #0: 00:21:45.344 Max Power: 0.00 W 00:21:45.344 Non-Operational State: Operational 00:21:45.344 Entry Latency: Not Reported 00:21:45.344 Exit Latency: Not Reported 00:21:45.344 Relative Read Throughput: 0 00:21:45.344 Relative Read Latency: 0 00:21:45.344 Relative Write Throughput: 0 00:21:45.344 Relative Write Latency: 0 00:21:45.344 Idle Power: Not Reported 00:21:45.344 Active Power: Not Reported 00:21:45.344 Non-Operational 
Permissive Mode: Not Supported 00:21:45.344 00:21:45.344 Health Information 00:21:45.344 ================== 00:21:45.344 Critical Warnings: 00:21:45.344 Available Spare Space: OK 00:21:45.344 Temperature: OK 00:21:45.344 Device Reliability: OK 00:21:45.344 Read Only: No 00:21:45.344 Volatile Memory Backup: OK 00:21:45.344 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:45.344 Temperature Threshold: [2024-04-27 02:41:18.911622] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.911627] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19d3d10) 00:21:45.344 [2024-04-27 02:41:18.911633] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.344 [2024-04-27 02:41:18.911646] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3c400, cid 7, qid 0 00:21:45.344 [2024-04-27 02:41:18.911892] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.344 [2024-04-27 02:41:18.911899] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.344 [2024-04-27 02:41:18.911902] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.911906] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3c400) on tqpair=0x19d3d10 00:21:45.344 [2024-04-27 02:41:18.911937] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:45.344 [2024-04-27 02:41:18.911948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.344 [2024-04-27 02:41:18.911955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.344 [2024-04-27 02:41:18.911961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.344 [2024-04-27 02:41:18.911967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.344 [2024-04-27 02:41:18.911975] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.911979] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.911982] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.344 [2024-04-27 02:41:18.911989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.344 [2024-04-27 02:41:18.912002] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.344 [2024-04-27 02:41:18.912219] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.344 [2024-04-27 02:41:18.912226] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.344 [2024-04-27 02:41:18.912229] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.912233] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.344 [2024-04-27 02:41:18.912240] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.912244] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.912247] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.344 [2024-04-27 02:41:18.912254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.344 [2024-04-27 02:41:18.912268] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.344 [2024-04-27 02:41:18.912517] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.344 [2024-04-27 02:41:18.912524] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.344 [2024-04-27 02:41:18.912528] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.912532] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.344 [2024-04-27 02:41:18.912537] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:45.344 [2024-04-27 02:41:18.912541] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:45.344 [2024-04-27 02:41:18.912551] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.912555] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.912558] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.344 [2024-04-27 02:41:18.912565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.344 [2024-04-27 02:41:18.912576] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.344 [2024-04-27 02:41:18.912825] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.344 [2024-04-27 02:41:18.912832] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.344 [2024-04-27 02:41:18.912835] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.912839] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.344 [2024-04-27 02:41:18.912850] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.912853] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.912857] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.344 [2024-04-27 02:41:18.912864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.344 [2024-04-27 02:41:18.912874] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.344 [2024-04-27 02:41:18.913226] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.344 [2024-04-27 02:41:18.913232] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.344 [2024-04-27 02:41:18.913235] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.913239] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.344 [2024-04-27 02:41:18.913250] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.913253] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.913257] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.344 [2024-04-27 02:41:18.913264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.344 [2024-04-27 02:41:18.913283] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.344 [2024-04-27 02:41:18.913627] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.344 [2024-04-27 02:41:18.913633] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.344 [2024-04-27 02:41:18.913637] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.913640] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.344 [2024-04-27 02:41:18.913651] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.913655] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.344 [2024-04-27 02:41:18.913658] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.345 [2024-04-27 02:41:18.913665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.345 [2024-04-27 02:41:18.913675] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.345 [2024-04-27 02:41:18.913900] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.345 [2024-04-27 02:41:18.913906] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.345 [2024-04-27 02:41:18.913909] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.913913] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.345 [2024-04-27 02:41:18.913923] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.913927] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.913931] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.345 [2024-04-27 02:41:18.913937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.345 [2024-04-27 02:41:18.913947] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.345 [2024-04-27 02:41:18.914184] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.345 [2024-04-27 02:41:18.914190] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.345 [2024-04-27 02:41:18.914193] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.914197] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.345 [2024-04-27 02:41:18.914207] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.914211] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.345 [2024-04-27 
02:41:18.914215] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.345 [2024-04-27 02:41:18.914221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.345 [2024-04-27 02:41:18.914231] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.345 [2024-04-27 02:41:18.914483] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.345 [2024-04-27 02:41:18.914491] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.345 [2024-04-27 02:41:18.914495] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.914498] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.345 [2024-04-27 02:41:18.914509] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.914513] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.914517] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.345 [2024-04-27 02:41:18.914523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.345 [2024-04-27 02:41:18.914539] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.345 [2024-04-27 02:41:18.914759] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.345 [2024-04-27 02:41:18.914765] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.345 [2024-04-27 02:41:18.914769] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.914773] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.345 [2024-04-27 02:41:18.914783] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.914787] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.914790] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.345 [2024-04-27 02:41:18.914797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.345 [2024-04-27 02:41:18.914807] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.345 [2024-04-27 02:41:18.915031] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.345 [2024-04-27 02:41:18.915037] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.345 [2024-04-27 02:41:18.915041] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.915044] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.345 [2024-04-27 02:41:18.915054] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.915058] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.915062] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.345 [2024-04-27 02:41:18.915068] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.345 [2024-04-27 02:41:18.915078] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.345 [2024-04-27 02:41:18.915334] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.345 [2024-04-27 02:41:18.915341] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.345 [2024-04-27 02:41:18.915344] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.915348] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.345 [2024-04-27 02:41:18.915358] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.915362] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.915366] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.345 [2024-04-27 02:41:18.915372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.345 [2024-04-27 02:41:18.915383] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.345 [2024-04-27 02:41:18.915626] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.345 [2024-04-27 02:41:18.915632] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.345 [2024-04-27 02:41:18.915635] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.915639] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.345 [2024-04-27 02:41:18.915649] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.915653] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.915656] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.345 [2024-04-27 02:41:18.915663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.345 [2024-04-27 02:41:18.915673] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.345 [2024-04-27 02:41:18.915913] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.345 [2024-04-27 02:41:18.915920] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.345 [2024-04-27 02:41:18.915923] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.915926] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.345 [2024-04-27 02:41:18.915937] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.915940] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.915944] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.345 [2024-04-27 02:41:18.915951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.345 [2024-04-27 02:41:18.915961] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.345 [2024-04-27 02:41:18.916209] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.345 [2024-04-27 02:41:18.916215] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.345 [2024-04-27 02:41:18.916219] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.916222] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.345 [2024-04-27 02:41:18.916232] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.916236] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.916240] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.345 [2024-04-27 02:41:18.916246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.345 [2024-04-27 02:41:18.916256] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.345 [2024-04-27 02:41:18.916490] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.345 [2024-04-27 02:41:18.916497] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.345 [2024-04-27 02:41:18.916501] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.916504] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.345 [2024-04-27 02:41:18.916515] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.916518] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.916522] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.345 [2024-04-27 02:41:18.916528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.345 [2024-04-27 02:41:18.916539] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.345 [2024-04-27 02:41:18.916769] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.345 [2024-04-27 02:41:18.916775] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.345 [2024-04-27 02:41:18.916778] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.916782] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.345 [2024-04-27 02:41:18.916792] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.916796] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.916799] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.345 [2024-04-27 02:41:18.916806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.345 [2024-04-27 02:41:18.916816] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.345 [2024-04-27 02:41:18.917061] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:21:45.345 [2024-04-27 02:41:18.917067] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.345 [2024-04-27 02:41:18.917070] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.917074] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.345 [2024-04-27 02:41:18.917085] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.345 [2024-04-27 02:41:18.917088] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.917092] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.346 [2024-04-27 02:41:18.917098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.346 [2024-04-27 02:41:18.917108] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.346 [2024-04-27 02:41:18.917337] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.346 [2024-04-27 02:41:18.917344] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.346 [2024-04-27 02:41:18.917348] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.917351] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.346 [2024-04-27 02:41:18.917362] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.917366] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.917369] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.346 [2024-04-27 02:41:18.917376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.346 [2024-04-27 02:41:18.917386] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.346 [2024-04-27 02:41:18.917595] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.346 [2024-04-27 02:41:18.917601] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.346 [2024-04-27 02:41:18.917605] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.917608] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.346 [2024-04-27 02:41:18.917618] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.917622] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.917626] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.346 [2024-04-27 02:41:18.917632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.346 [2024-04-27 02:41:18.917642] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.346 [2024-04-27 02:41:18.917848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.346 [2024-04-27 02:41:18.917854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.346 [2024-04-27 02:41:18.917857] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.917861] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.346 [2024-04-27 02:41:18.917871] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.917875] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.917878] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.346 [2024-04-27 02:41:18.917885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.346 [2024-04-27 02:41:18.917895] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.346 [2024-04-27 02:41:18.918094] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.346 [2024-04-27 02:41:18.918103] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.346 [2024-04-27 02:41:18.918107] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.918110] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.346 [2024-04-27 02:41:18.918121] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.918125] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.918128] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.346 [2024-04-27 02:41:18.918135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.346 [2024-04-27 02:41:18.918145] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.346 [2024-04-27 02:41:18.922288] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.346 [2024-04-27 02:41:18.922296] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.346 [2024-04-27 02:41:18.922300] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.922303] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on tqpair=0x19d3d10 00:21:45.346 [2024-04-27 02:41:18.922314] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.922318] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.922321] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19d3d10) 00:21:45.346 [2024-04-27 02:41:18.922337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.346 [2024-04-27 02:41:18.922349] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a3be80, cid 3, qid 0 00:21:45.346 [2024-04-27 02:41:18.922558] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.346 [2024-04-27 02:41:18.922564] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.346 [2024-04-27 02:41:18.922568] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.346 [2024-04-27 02:41:18.922571] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a3be80) on 
tqpair=0x19d3d10 00:21:45.346 [2024-04-27 02:41:18.922579] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 10 milliseconds 00:21:45.346 0 Kelvin (-273 Celsius) 00:21:45.346 Available Spare: 0% 00:21:45.346 Available Spare Threshold: 0% 00:21:45.346 Life Percentage Used: 0% 00:21:45.346 Data Units Read: 0 00:21:45.346 Data Units Written: 0 00:21:45.346 Host Read Commands: 0 00:21:45.346 Host Write Commands: 0 00:21:45.346 Controller Busy Time: 0 minutes 00:21:45.346 Power Cycles: 0 00:21:45.346 Power On Hours: 0 hours 00:21:45.346 Unsafe Shutdowns: 0 00:21:45.346 Unrecoverable Media Errors: 0 00:21:45.346 Lifetime Error Log Entries: 0 00:21:45.346 Warning Temperature Time: 0 minutes 00:21:45.346 Critical Temperature Time: 0 minutes 00:21:45.346 00:21:45.346 Number of Queues 00:21:45.346 ================ 00:21:45.346 Number of I/O Submission Queues: 127 00:21:45.346 Number of I/O Completion Queues: 127 00:21:45.346 00:21:45.346 Active Namespaces 00:21:45.346 ================= 00:21:45.346 Namespace ID:1 00:21:45.346 Error Recovery Timeout: Unlimited 00:21:45.346 Command Set Identifier: NVM (00h) 00:21:45.346 Deallocate: Supported 00:21:45.346 Deallocated/Unwritten Error: Not Supported 00:21:45.346 Deallocated Read Value: Unknown 00:21:45.346 Deallocate in Write Zeroes: Not Supported 00:21:45.346 Deallocated Guard Field: 0xFFFF 00:21:45.346 Flush: Supported 00:21:45.346 Reservation: Supported 00:21:45.346 Namespace Sharing Capabilities: Multiple Controllers 00:21:45.346 Size (in LBAs): 131072 (0GiB) 00:21:45.346 Capacity (in LBAs): 131072 (0GiB) 00:21:45.346 Utilization (in LBAs): 131072 (0GiB) 00:21:45.346 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:45.346 EUI64: ABCDEF0123456789 00:21:45.346 UUID: 3afb3eeb-1f07-4b34-b839-75f0fb76acdf 00:21:45.346 Thin Provisioning: Not Supported 00:21:45.346 Per-NS Atomic Units: Yes 00:21:45.346 Atomic Boundary Size (Normal): 0 00:21:45.346 Atomic Boundary Size (PFail): 0 00:21:45.346 Atomic Boundary Offset: 0 00:21:45.346 Maximum Single Source Range Length: 65535 00:21:45.346 Maximum Copy Length: 65535 00:21:45.346 Maximum Source Range Count: 1 00:21:45.346 NGUID/EUI64 Never Reused: No 00:21:45.346 Namespace Write Protected: No 00:21:45.346 Number of LBA Formats: 1 00:21:45.346 Current LBA Format: LBA Format #00 00:21:45.346 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:45.346 00:21:45.346 02:41:18 -- host/identify.sh@51 -- # sync 00:21:45.346 02:41:18 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.346 02:41:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.346 02:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:45.346 02:41:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.346 02:41:18 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:45.346 02:41:18 -- host/identify.sh@56 -- # nvmftestfini 00:21:45.346 02:41:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:45.346 02:41:18 -- nvmf/common.sh@117 -- # sync 00:21:45.346 02:41:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.346 02:41:18 -- nvmf/common.sh@120 -- # set +e 00:21:45.346 02:41:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.346 02:41:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.607 rmmod nvme_tcp 00:21:45.607 rmmod nvme_fabrics 00:21:45.607 rmmod nvme_keyring 00:21:45.607 02:41:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.607 02:41:19 -- nvmf/common.sh@124 -- # set -e 00:21:45.607 
02:41:19 -- nvmf/common.sh@125 -- # return 0 00:21:45.607 02:41:19 -- nvmf/common.sh@478 -- # '[' -n 198451 ']' 00:21:45.607 02:41:19 -- nvmf/common.sh@479 -- # killprocess 198451 00:21:45.607 02:41:19 -- common/autotest_common.sh@936 -- # '[' -z 198451 ']' 00:21:45.607 02:41:19 -- common/autotest_common.sh@940 -- # kill -0 198451 00:21:45.607 02:41:19 -- common/autotest_common.sh@941 -- # uname 00:21:45.607 02:41:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:45.607 02:41:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 198451 00:21:45.607 02:41:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:45.607 02:41:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:45.607 02:41:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 198451' 00:21:45.607 killing process with pid 198451 00:21:45.607 02:41:19 -- common/autotest_common.sh@955 -- # kill 198451 00:21:45.608 [2024-04-27 02:41:19.066821] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:45.608 02:41:19 -- common/autotest_common.sh@960 -- # wait 198451 00:21:45.608 02:41:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:45.608 02:41:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:45.608 02:41:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:45.608 02:41:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.608 02:41:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.608 02:41:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.608 02:41:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.608 02:41:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.157 02:41:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:48.157 00:21:48.157 real 0m10.803s 00:21:48.157 user 0m7.958s 00:21:48.157 sys 0m5.504s 00:21:48.157 02:41:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:48.157 02:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:48.157 ************************************ 00:21:48.157 END TEST nvmf_identify 00:21:48.157 ************************************ 00:21:48.157 02:41:21 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:48.157 02:41:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:48.157 02:41:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:48.157 02:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:48.157 ************************************ 00:21:48.157 START TEST nvmf_perf 00:21:48.157 ************************************ 00:21:48.157 02:41:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:48.157 * Looking for test storage... 
00:21:48.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:48.157 02:41:21 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.157 02:41:21 -- nvmf/common.sh@7 -- # uname -s 00:21:48.157 02:41:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.157 02:41:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.157 02:41:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.157 02:41:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.157 02:41:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.157 02:41:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.157 02:41:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.157 02:41:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.157 02:41:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.157 02:41:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.157 02:41:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:48.157 02:41:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:48.157 02:41:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.157 02:41:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.157 02:41:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.157 02:41:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.157 02:41:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.157 02:41:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.157 02:41:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.157 02:41:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.157 02:41:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.157 02:41:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.157 02:41:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.157 02:41:21 -- paths/export.sh@5 -- # export PATH 00:21:48.157 02:41:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.157 02:41:21 -- nvmf/common.sh@47 -- # : 0 00:21:48.157 02:41:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:48.157 02:41:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:48.157 02:41:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.157 02:41:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.157 02:41:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.157 02:41:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:48.157 02:41:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:48.157 02:41:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:48.157 02:41:21 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:48.157 02:41:21 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:48.157 02:41:21 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:48.157 02:41:21 -- host/perf.sh@17 -- # nvmftestinit 00:21:48.157 02:41:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:48.157 02:41:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.157 02:41:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:48.157 02:41:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:48.157 02:41:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:48.157 02:41:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.157 02:41:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.157 02:41:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.157 02:41:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:48.157 02:41:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:48.157 02:41:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:48.157 02:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:56.304 02:41:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:56.304 02:41:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.304 02:41:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.304 02:41:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.304 02:41:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.304 02:41:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.304 02:41:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.304 02:41:28 -- nvmf/common.sh@295 -- # net_devs=() 
00:21:56.304 02:41:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.304 02:41:28 -- nvmf/common.sh@296 -- # e810=() 00:21:56.304 02:41:28 -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.304 02:41:28 -- nvmf/common.sh@297 -- # x722=() 00:21:56.304 02:41:28 -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.304 02:41:28 -- nvmf/common.sh@298 -- # mlx=() 00:21:56.304 02:41:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.304 02:41:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.304 02:41:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.304 02:41:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.304 02:41:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.304 02:41:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.304 02:41:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.304 02:41:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.304 02:41:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.304 02:41:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.304 02:41:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.304 02:41:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.304 02:41:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.304 02:41:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:56.304 02:41:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.304 02:41:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.304 02:41:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:56.304 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:56.304 02:41:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.304 02:41:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:56.304 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:56.304 02:41:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.304 02:41:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.304 02:41:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.304 02:41:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:56.304 02:41:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:21:56.304 02:41:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:56.304 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:56.304 02:41:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.304 02:41:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.304 02:41:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.304 02:41:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:56.304 02:41:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.304 02:41:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:56.304 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:56.304 02:41:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.304 02:41:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:56.304 02:41:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:56.304 02:41:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:56.304 02:41:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.304 02:41:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.304 02:41:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.304 02:41:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:56.304 02:41:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.304 02:41:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.304 02:41:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:56.304 02:41:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.304 02:41:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.304 02:41:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:56.304 02:41:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:56.304 02:41:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.304 02:41:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.304 02:41:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.304 02:41:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.304 02:41:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:56.304 02:41:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.304 02:41:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.304 02:41:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.304 02:41:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:56.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:21:56.304 00:21:56.304 --- 10.0.0.2 ping statistics --- 00:21:56.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.304 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:21:56.304 02:41:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:56.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.441 ms 00:21:56.304 00:21:56.304 --- 10.0.0.1 ping statistics --- 00:21:56.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.304 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:21:56.304 02:41:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.304 02:41:28 -- nvmf/common.sh@411 -- # return 0 00:21:56.304 02:41:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:56.304 02:41:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.304 02:41:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:56.304 02:41:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.304 02:41:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:56.304 02:41:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:56.304 02:41:28 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:56.304 02:41:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:56.304 02:41:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:56.304 02:41:28 -- common/autotest_common.sh@10 -- # set +x 00:21:56.304 02:41:28 -- nvmf/common.sh@470 -- # nvmfpid=202815 00:21:56.304 02:41:28 -- nvmf/common.sh@471 -- # waitforlisten 202815 00:21:56.304 02:41:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:56.304 02:41:28 -- common/autotest_common.sh@817 -- # '[' -z 202815 ']' 00:21:56.304 02:41:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.304 02:41:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:56.304 02:41:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.304 02:41:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:56.305 02:41:28 -- common/autotest_common.sh@10 -- # set +x 00:21:56.305 [2024-04-27 02:41:28.864704] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:21:56.305 [2024-04-27 02:41:28.864774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.305 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.305 [2024-04-27 02:41:28.936910] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:56.305 [2024-04-27 02:41:29.009798] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.305 [2024-04-27 02:41:29.009838] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.305 [2024-04-27 02:41:29.009848] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.305 [2024-04-27 02:41:29.009855] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.305 [2024-04-27 02:41:29.009862] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
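The nvmf_tcp_init and nvmfappstart steps traced above reduce to the sketch below. This is only a condensed re-ordering of the commands already visible in the trace, not additional steps: the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, port 4420 and the 0xF core mask are simply the values this particular run uses.

# target side: move one E810 port into its own namespace and address it as 10.0.0.2
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# initiator side: the second port stays in the root namespace as 10.0.0.1
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify connectivity in both directions, then start the target inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF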
00:21:56.305 [2024-04-27 02:41:29.009907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.305 [2024-04-27 02:41:29.010045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.305 [2024-04-27 02:41:29.010071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:56.305 [2024-04-27 02:41:29.010074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.305 02:41:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:56.305 02:41:29 -- common/autotest_common.sh@850 -- # return 0 00:21:56.305 02:41:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:56.305 02:41:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:56.305 02:41:29 -- common/autotest_common.sh@10 -- # set +x 00:21:56.305 02:41:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.305 02:41:29 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:56.305 02:41:29 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:56.566 02:41:30 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:56.566 02:41:30 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:56.827 02:41:30 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:21:56.827 02:41:30 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:57.088 02:41:30 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:57.088 02:41:30 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:21:57.088 02:41:30 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:57.088 02:41:30 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:57.088 02:41:30 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:57.088 [2024-04-27 02:41:30.651467] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.088 02:41:30 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:57.359 02:41:30 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:57.359 02:41:30 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:57.658 02:41:31 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:57.658 02:41:31 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:57.658 02:41:31 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.922 [2024-04-27 02:41:31.329941] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.922 02:41:31 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:57.922 02:41:31 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:21:57.922 02:41:31 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:21:57.922 02:41:31 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
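Condensed from the rpc.py calls traced above, the target configuration exercised by the perf runs that follow looks roughly like this (paths shortened to the repo-relative form; the local NVMe address 0000:65:00.0 and the 64 MB / 512-byte-block malloc bdev are specific to this host):

scripts/gen_nvme.sh                          # emits bdev config for the local NVMe controller (0000:65:00.0)
scripts/rpc.py load_subsystem_config         # perf.sh feeds the generated config to this call; attaches the controller as Nvme0 (bdev Nvme0n1)
scripts/rpc.py bdev_malloc_create 64 512     # -> Malloc0
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf invocations that follow drive this subsystem (and, as a baseline, the local PCIe device) with -q (queue depth), -o (I/O size in bytes), -w (workload pattern), -M (read percentage) and -t (run time in seconds), which is how the 50/50 random read/write results below were produced.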
00:21:57.922 02:41:31 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:21:59.309 Initializing NVMe Controllers 00:21:59.309 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:21:59.309 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:21:59.309 Initialization complete. Launching workers. 00:21:59.309 ======================================================== 00:21:59.309 Latency(us) 00:21:59.309 Device Information : IOPS MiB/s Average min max 00:21:59.309 PCIE (0000:65:00.0) NSID 1 from core 0: 81028.08 316.52 394.34 75.40 4429.01 00:21:59.309 ======================================================== 00:21:59.309 Total : 81028.08 316.52 394.34 75.40 4429.01 00:21:59.309 00:21:59.309 02:41:32 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:59.309 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.690 Initializing NVMe Controllers 00:22:00.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:00.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:00.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:00.690 Initialization complete. Launching workers. 00:22:00.690 ======================================================== 00:22:00.690 Latency(us) 00:22:00.690 Device Information : IOPS MiB/s Average min max 00:22:00.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 90.00 0.35 11200.72 543.39 49405.35 00:22:00.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 17155.78 4971.34 51876.95 00:22:00.691 ======================================================== 00:22:00.691 Total : 151.00 0.59 13606.41 543.39 51876.95 00:22:00.691 00:22:00.691 02:41:33 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:00.691 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.630 Initializing NVMe Controllers 00:22:01.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:01.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:01.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:01.630 Initialization complete. Launching workers. 
00:22:01.630 ======================================================== 00:22:01.630 Latency(us) 00:22:01.630 Device Information : IOPS MiB/s Average min max 00:22:01.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6987.00 27.29 4582.90 899.45 9335.78 00:22:01.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3629.00 14.18 8873.67 6519.97 16562.21 00:22:01.630 ======================================================== 00:22:01.630 Total : 10616.00 41.47 6049.67 899.45 16562.21 00:22:01.630 00:22:01.630 02:41:35 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:01.630 02:41:35 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:01.630 02:41:35 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:01.630 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.174 Initializing NVMe Controllers 00:22:04.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:04.174 Controller IO queue size 128, less than required. 00:22:04.174 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:04.174 Controller IO queue size 128, less than required. 00:22:04.174 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:04.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:04.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:04.174 Initialization complete. Launching workers. 00:22:04.174 ======================================================== 00:22:04.174 Latency(us) 00:22:04.174 Device Information : IOPS MiB/s Average min max 00:22:04.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 782.45 195.61 168588.27 84473.31 223204.98 00:22:04.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 578.59 144.65 233603.36 101305.34 343822.33 00:22:04.174 ======================================================== 00:22:04.174 Total : 1361.04 340.26 196226.84 84473.31 343822.33 00:22:04.174 00:22:04.174 02:41:37 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:04.436 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.436 No valid NVMe controllers or AIO or URING devices found 00:22:04.436 Initializing NVMe Controllers 00:22:04.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:04.436 Controller IO queue size 128, less than required. 00:22:04.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:04.436 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:04.436 Controller IO queue size 128, less than required. 00:22:04.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:04.436 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:04.436 WARNING: Some requested NVMe devices were skipped 00:22:04.436 02:41:37 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:04.436 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.739 Initializing NVMe Controllers 00:22:07.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:07.739 Controller IO queue size 128, less than required. 00:22:07.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.739 Controller IO queue size 128, less than required. 00:22:07.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:07.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:07.739 Initialization complete. Launching workers. 00:22:07.739 00:22:07.739 ==================== 00:22:07.739 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:07.739 TCP transport: 00:22:07.739 polls: 40032 00:22:07.739 idle_polls: 12706 00:22:07.739 sock_completions: 27326 00:22:07.739 nvme_completions: 3631 00:22:07.739 submitted_requests: 5480 00:22:07.739 queued_requests: 1 00:22:07.739 00:22:07.739 ==================== 00:22:07.739 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:07.739 TCP transport: 00:22:07.739 polls: 40256 00:22:07.739 idle_polls: 13143 00:22:07.739 sock_completions: 27113 00:22:07.739 nvme_completions: 3589 00:22:07.739 submitted_requests: 5352 00:22:07.739 queued_requests: 1 00:22:07.739 ======================================================== 00:22:07.739 Latency(us) 00:22:07.739 Device Information : IOPS MiB/s Average min max 00:22:07.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 907.30 226.83 146351.52 84352.27 247968.80 00:22:07.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 896.80 224.20 146949.44 88455.43 225029.55 00:22:07.739 ======================================================== 00:22:07.739 Total : 1804.11 451.03 146648.74 84352.27 247968.80 00:22:07.739 00:22:07.739 02:41:40 -- host/perf.sh@66 -- # sync 00:22:07.739 02:41:40 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:07.739 02:41:40 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:07.739 02:41:40 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:07.739 02:41:40 -- host/perf.sh@114 -- # nvmftestfini 00:22:07.739 02:41:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:07.739 02:41:40 -- nvmf/common.sh@117 -- # sync 00:22:07.739 02:41:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:07.739 02:41:40 -- nvmf/common.sh@120 -- # set +e 00:22:07.739 02:41:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:07.739 02:41:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:07.739 rmmod nvme_tcp 00:22:07.739 rmmod nvme_fabrics 00:22:07.739 rmmod nvme_keyring 00:22:07.739 02:41:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:07.739 02:41:40 -- nvmf/common.sh@124 -- # set -e 00:22:07.739 02:41:40 -- nvmf/common.sh@125 -- # return 0 00:22:07.739 02:41:40 -- 
nvmf/common.sh@478 -- # '[' -n 202815 ']' 00:22:07.739 02:41:40 -- nvmf/common.sh@479 -- # killprocess 202815 00:22:07.739 02:41:40 -- common/autotest_common.sh@936 -- # '[' -z 202815 ']' 00:22:07.739 02:41:40 -- common/autotest_common.sh@940 -- # kill -0 202815 00:22:07.739 02:41:40 -- common/autotest_common.sh@941 -- # uname 00:22:07.739 02:41:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:07.739 02:41:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 202815 00:22:07.739 02:41:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:07.739 02:41:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:07.739 02:41:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 202815' 00:22:07.739 killing process with pid 202815 00:22:07.739 02:41:40 -- common/autotest_common.sh@955 -- # kill 202815 00:22:07.739 02:41:40 -- common/autotest_common.sh@960 -- # wait 202815 00:22:09.652 02:41:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:09.652 02:41:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:09.652 02:41:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:09.652 02:41:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.652 02:41:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:09.652 02:41:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.652 02:41:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.652 02:41:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.568 02:41:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:11.568 00:22:11.568 real 0m23.508s 00:22:11.568 user 0m57.197s 00:22:11.568 sys 0m7.626s 00:22:11.568 02:41:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:11.568 02:41:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.568 ************************************ 00:22:11.568 END TEST nvmf_perf 00:22:11.568 ************************************ 00:22:11.568 02:41:45 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:11.568 02:41:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:11.568 02:41:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:11.569 02:41:45 -- common/autotest_common.sh@10 -- # set +x 00:22:11.831 ************************************ 00:22:11.831 START TEST nvmf_fio_host 00:22:11.831 ************************************ 00:22:11.831 02:41:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:11.831 * Looking for test storage... 
00:22:11.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:11.831 02:41:45 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.831 02:41:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.831 02:41:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.831 02:41:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.831 02:41:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.831 02:41:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.831 02:41:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.831 02:41:45 -- paths/export.sh@5 -- # export PATH 00:22:11.831 02:41:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.831 02:41:45 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.831 02:41:45 -- nvmf/common.sh@7 -- # uname -s 00:22:11.831 02:41:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.831 02:41:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.831 02:41:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.831 02:41:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.831 02:41:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.831 02:41:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.831 02:41:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.831 02:41:45 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.831 02:41:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.831 02:41:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.831 02:41:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:11.831 02:41:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:11.831 02:41:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.831 02:41:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.831 02:41:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.831 02:41:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.831 02:41:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.831 02:41:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.831 02:41:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.831 02:41:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.831 02:41:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.832 02:41:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.832 02:41:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.832 02:41:45 -- paths/export.sh@5 -- # export PATH 00:22:11.832 02:41:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.832 02:41:45 -- nvmf/common.sh@47 -- # : 0 00:22:11.832 02:41:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.832 02:41:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.832 02:41:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.832 02:41:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.832 02:41:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.832 02:41:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.832 02:41:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:11.832 02:41:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.832 02:41:45 -- host/fio.sh@12 -- # nvmftestinit 00:22:11.832 02:41:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:11.832 02:41:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.832 02:41:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:11.832 02:41:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:11.832 02:41:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:11.832 02:41:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.832 02:41:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.832 02:41:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.832 02:41:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:11.832 02:41:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:11.832 02:41:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:11.832 02:41:45 -- common/autotest_common.sh@10 -- # set +x 00:22:19.979 02:41:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:19.979 02:41:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.979 02:41:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.979 02:41:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.979 02:41:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.979 02:41:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.979 02:41:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.979 02:41:52 -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.979 02:41:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.979 02:41:52 -- nvmf/common.sh@296 -- # e810=() 00:22:19.979 02:41:52 -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.979 02:41:52 -- nvmf/common.sh@297 -- # x722=() 00:22:19.979 02:41:52 -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.979 02:41:52 -- nvmf/common.sh@298 -- # mlx=() 00:22:19.979 02:41:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.979 02:41:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.979 02:41:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.979 02:41:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.979 02:41:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.979 02:41:52 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.979 02:41:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.979 02:41:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.979 02:41:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.979 02:41:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.979 02:41:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.979 02:41:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.979 02:41:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.979 02:41:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.979 02:41:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.979 02:41:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.979 02:41:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:19.979 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:19.979 02:41:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.979 02:41:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:19.979 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:19.979 02:41:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.979 02:41:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.979 02:41:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.979 02:41:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:19.979 02:41:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.979 02:41:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:19.979 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:19.979 02:41:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.979 02:41:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.979 02:41:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.979 02:41:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:19.979 02:41:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.979 02:41:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:19.979 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:19.979 02:41:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.979 02:41:52 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:19.979 02:41:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:19.979 02:41:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:19.979 02:41:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.979 02:41:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.979 02:41:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.979 02:41:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.979 02:41:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.979 02:41:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.979 02:41:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.979 02:41:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.979 02:41:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.979 02:41:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.979 02:41:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.979 02:41:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.979 02:41:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.979 02:41:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.979 02:41:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.979 02:41:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.979 02:41:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.979 02:41:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.979 02:41:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.979 02:41:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.727 ms 00:22:19.979 00:22:19.979 --- 10.0.0.2 ping statistics --- 00:22:19.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.979 rtt min/avg/max/mdev = 0.727/0.727/0.727/0.000 ms 00:22:19.979 02:41:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:19.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.402 ms 00:22:19.979 00:22:19.979 --- 10.0.0.1 ping statistics --- 00:22:19.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.979 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:22:19.979 02:41:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.979 02:41:52 -- nvmf/common.sh@411 -- # return 0 00:22:19.979 02:41:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:19.979 02:41:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.979 02:41:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:19.979 02:41:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.979 02:41:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:19.979 02:41:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:19.979 02:41:52 -- host/fio.sh@14 -- # [[ y != y ]] 00:22:19.979 02:41:52 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:19.979 02:41:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:19.979 02:41:52 -- common/autotest_common.sh@10 -- # set +x 00:22:19.979 02:41:52 -- host/fio.sh@22 -- # nvmfpid=209871 00:22:19.979 02:41:52 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:19.979 02:41:52 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:19.979 02:41:52 -- host/fio.sh@26 -- # waitforlisten 209871 00:22:19.979 02:41:52 -- common/autotest_common.sh@817 -- # '[' -z 209871 ']' 00:22:19.979 02:41:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.979 02:41:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:19.979 02:41:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.979 02:41:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:19.979 02:41:52 -- common/autotest_common.sh@10 -- # set +x 00:22:19.979 [2024-04-27 02:41:52.563457] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:22:19.979 [2024-04-27 02:41:52.563520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.979 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.979 [2024-04-27 02:41:52.634695] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.979 [2024-04-27 02:41:52.707465] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.979 [2024-04-27 02:41:52.707504] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.979 [2024-04-27 02:41:52.707513] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.979 [2024-04-27 02:41:52.707520] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.979 [2024-04-27 02:41:52.707527] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
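The block above is nvmf_tcp_init building the two-namespace topology the rest of this run talks across: one of the two ice-driven e810 ports (cvl_0_0, under 0000:4b:00.0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, while its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched with ip netns exec so it only sees the target port. A minimal sketch of that setup, assembled from the commands the log shows (interface names, addresses and the 4420 port are simply the values this CI node uses), looks like:

    ip netns add cvl_0_0_ns_spdk                  # namespace that will own the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port out of the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                            # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Splitting the two back-to-back ports across namespaces presumably keeps the kernel from short-circuiting 10.0.0.1 -> 10.0.0.2 over loopback, so the NVMe/TCP traffic in the fio workload below really traverses the e810 link.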
00:22:19.979 [2024-04-27 02:41:52.707656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.979 [2024-04-27 02:41:52.707769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.979 [2024-04-27 02:41:52.707797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.979 [2024-04-27 02:41:52.707799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.979 02:41:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:19.979 02:41:53 -- common/autotest_common.sh@850 -- # return 0 00:22:19.979 02:41:53 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:19.979 02:41:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.979 02:41:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.980 [2024-04-27 02:41:53.351795] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.980 02:41:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.980 02:41:53 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:19.980 02:41:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:19.980 02:41:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.980 02:41:53 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:19.980 02:41:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.980 02:41:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.980 Malloc1 00:22:19.980 02:41:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.980 02:41:53 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:19.980 02:41:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.980 02:41:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.980 02:41:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.980 02:41:53 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:19.980 02:41:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.980 02:41:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.980 02:41:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.980 02:41:53 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.980 02:41:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.980 02:41:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.980 [2024-04-27 02:41:53.451329] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.980 02:41:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.980 02:41:53 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:19.980 02:41:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.980 02:41:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.980 02:41:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.980 02:41:53 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:19.980 02:41:53 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:19.980 02:41:53 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:19.980 02:41:53 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:19.980 02:41:53 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:19.980 02:41:53 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:19.980 02:41:53 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:19.980 02:41:53 -- common/autotest_common.sh@1327 -- # shift 00:22:19.980 02:41:53 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:19.980 02:41:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:19.980 02:41:53 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:19.980 02:41:53 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:19.980 02:41:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:19.980 02:41:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:19.980 02:41:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:19.980 02:41:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:19.980 02:41:53 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:19.980 02:41:53 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:19.980 02:41:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:19.980 02:41:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:19.980 02:41:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:19.980 02:41:53 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:19.980 02:41:53 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:20.240 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:20.240 fio-3.35 00:22:20.240 Starting 1 thread 00:22:20.500 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.046 00:22:23.046 test: (groupid=0, jobs=1): err= 0: pid=210317: Sat Apr 27 02:41:56 2024 00:22:23.046 read: IOPS=9371, BW=36.6MiB/s (38.4MB/s)(73.4MiB/2004msec) 00:22:23.046 slat (usec): min=2, max=278, avg= 2.20, stdev= 2.81 00:22:23.046 clat (usec): min=3746, max=14422, avg=7731.25, stdev=1228.41 00:22:23.046 lat (usec): min=3748, max=14424, avg=7733.44, stdev=1228.41 00:22:23.046 clat percentiles (usec): 00:22:23.046 | 1.00th=[ 5342], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 6980], 00:22:23.046 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7701], 00:22:23.046 | 70.00th=[ 7963], 80.00th=[ 8225], 90.00th=[ 8979], 95.00th=[10421], 00:22:23.046 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13829], 99.95th=[13829], 00:22:23.046 | 99.99th=[14222] 00:22:23.046 bw ( KiB/s): min=36208, max=38048, per=99.74%, avg=37390.00, stdev=835.54, samples=4 00:22:23.046 iops : min= 9052, max= 9512, avg=9347.50, stdev=208.89, samples=4 00:22:23.046 write: IOPS=9374, BW=36.6MiB/s (38.4MB/s)(73.4MiB/2004msec); 0 zone resets 00:22:23.046 slat (usec): min=2, max=257, avg= 2.30, stdev= 2.09 00:22:23.046 clat (usec): min=2620, 
max=9653, avg=5856.32, stdev=781.61 00:22:23.046 lat (usec): min=2623, max=9660, avg=5858.62, stdev=781.68 00:22:23.046 clat percentiles (usec): 00:22:23.046 | 1.00th=[ 3523], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5342], 00:22:23.046 | 30.00th=[ 5604], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:22:23.046 | 70.00th=[ 6194], 80.00th=[ 6390], 90.00th=[ 6652], 95.00th=[ 6915], 00:22:23.046 | 99.00th=[ 7963], 99.50th=[ 8356], 99.90th=[ 9241], 99.95th=[ 9372], 00:22:23.046 | 99.99th=[ 9634] 00:22:23.046 bw ( KiB/s): min=37024, max=38016, per=99.98%, avg=37492.00, stdev=426.76, samples=4 00:22:23.046 iops : min= 9256, max= 9504, avg=9373.00, stdev=106.69, samples=4 00:22:23.046 lat (msec) : 4=1.18%, 10=95.76%, 20=3.06% 00:22:23.046 cpu : usr=64.25%, sys=28.16%, ctx=42, majf=0, minf=5 00:22:23.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:23.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:23.046 issued rwts: total=18781,18787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:23.046 00:22:23.046 Run status group 0 (all jobs): 00:22:23.046 READ: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=73.4MiB (76.9MB), run=2004-2004msec 00:22:23.046 WRITE: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=73.4MiB (77.0MB), run=2004-2004msec 00:22:23.046 02:41:56 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:23.046 02:41:56 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:23.046 02:41:56 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:23.046 02:41:56 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:23.047 02:41:56 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:23.047 02:41:56 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:23.047 02:41:56 -- common/autotest_common.sh@1327 -- # shift 00:22:23.047 02:41:56 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:23.047 02:41:56 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:23.047 02:41:56 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:23.047 02:41:56 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:23.047 02:41:56 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:23.047 02:41:56 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:23.047 02:41:56 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:23.047 02:41:56 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:23.047 02:41:56 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:23.047 02:41:56 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:23.047 02:41:56 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:23.047 02:41:56 -- common/autotest_common.sh@1331 
-- # asan_lib= 00:22:23.047 02:41:56 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:23.047 02:41:56 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:23.047 02:41:56 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:23.047 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:23.047 fio-3.35 00:22:23.047 Starting 1 thread 00:22:23.047 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.591 00:22:25.591 test: (groupid=0, jobs=1): err= 0: pid=210900: Sat Apr 27 02:41:58 2024 00:22:25.591 read: IOPS=8455, BW=132MiB/s (139MB/s)(266MiB/2011msec) 00:22:25.591 slat (usec): min=3, max=113, avg= 3.62, stdev= 1.68 00:22:25.591 clat (usec): min=3110, max=28726, avg=9329.87, stdev=3249.29 00:22:25.591 lat (usec): min=3113, max=28729, avg=9333.49, stdev=3249.70 00:22:25.592 clat percentiles (usec): 00:22:25.592 | 1.00th=[ 4424], 5.00th=[ 5342], 10.00th=[ 5932], 20.00th=[ 6718], 00:22:25.592 | 30.00th=[ 7439], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9503], 00:22:25.592 | 70.00th=[10290], 80.00th=[11338], 90.00th=[13566], 95.00th=[15139], 00:22:25.592 | 99.00th=[20317], 99.50th=[22152], 99.90th=[28181], 99.95th=[28443], 00:22:25.592 | 99.99th=[28443] 00:22:25.592 bw ( KiB/s): min=59808, max=81792, per=51.18%, avg=69240.00, stdev=10339.56, samples=4 00:22:25.592 iops : min= 3738, max= 5112, avg=4327.50, stdev=646.22, samples=4 00:22:25.592 write: IOPS=4944, BW=77.3MiB/s (81.0MB/s)(140MiB/1816msec); 0 zone resets 00:22:25.592 slat (usec): min=40, max=448, avg=41.16, stdev= 8.60 00:22:25.592 clat (usec): min=3182, max=28057, avg=9886.73, stdev=2704.10 00:22:25.592 lat (usec): min=3222, max=28101, avg=9927.89, stdev=2707.43 00:22:25.592 clat percentiles (usec): 00:22:25.592 | 1.00th=[ 6456], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 8160], 00:22:25.592 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:22:25.592 | 70.00th=[10421], 80.00th=[11076], 90.00th=[12125], 95.00th=[13173], 00:22:25.592 | 99.00th=[22676], 99.50th=[27657], 99.90th=[27919], 99.95th=[27919], 00:22:25.592 | 99.99th=[28181] 00:22:25.592 bw ( KiB/s): min=62080, max=84992, per=90.80%, avg=71840.00, stdev=10955.25, samples=4 00:22:25.592 iops : min= 3880, max= 5312, avg=4490.00, stdev=684.70, samples=4 00:22:25.592 lat (msec) : 4=0.18%, 10=64.94%, 20=33.44%, 50=1.43% 00:22:25.592 cpu : usr=81.49%, sys=14.18%, ctx=16, majf=0, minf=12 00:22:25.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:22:25.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:25.592 issued rwts: total=17004,8980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:25.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:25.592 00:22:25.592 Run status group 0 (all jobs): 00:22:25.592 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=266MiB (279MB), run=2011-2011msec 00:22:25.592 WRITE: bw=77.3MiB/s (81.0MB/s), 77.3MiB/s-77.3MiB/s (81.0MB/s-81.0MB/s), io=140MiB (147MB), run=1816-1816msec 00:22:25.592 02:41:59 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:25.592 02:41:59 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:22:25.592 02:41:59 -- common/autotest_common.sh@10 -- # set +x 00:22:25.592 02:41:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.592 02:41:59 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:25.592 02:41:59 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:25.592 02:41:59 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:25.592 02:41:59 -- host/fio.sh@84 -- # nvmftestfini 00:22:25.592 02:41:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:25.592 02:41:59 -- nvmf/common.sh@117 -- # sync 00:22:25.592 02:41:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:25.592 02:41:59 -- nvmf/common.sh@120 -- # set +e 00:22:25.592 02:41:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:25.592 02:41:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:25.592 rmmod nvme_tcp 00:22:25.592 rmmod nvme_fabrics 00:22:25.592 rmmod nvme_keyring 00:22:25.592 02:41:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:25.592 02:41:59 -- nvmf/common.sh@124 -- # set -e 00:22:25.592 02:41:59 -- nvmf/common.sh@125 -- # return 0 00:22:25.592 02:41:59 -- nvmf/common.sh@478 -- # '[' -n 209871 ']' 00:22:25.592 02:41:59 -- nvmf/common.sh@479 -- # killprocess 209871 00:22:25.592 02:41:59 -- common/autotest_common.sh@936 -- # '[' -z 209871 ']' 00:22:25.592 02:41:59 -- common/autotest_common.sh@940 -- # kill -0 209871 00:22:25.592 02:41:59 -- common/autotest_common.sh@941 -- # uname 00:22:25.592 02:41:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:25.592 02:41:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 209871 00:22:25.592 02:41:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:25.592 02:41:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:25.592 02:41:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 209871' 00:22:25.592 killing process with pid 209871 00:22:25.592 02:41:59 -- common/autotest_common.sh@955 -- # kill 209871 00:22:25.592 02:41:59 -- common/autotest_common.sh@960 -- # wait 209871 00:22:25.852 02:41:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:25.852 02:41:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:25.852 02:41:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:25.852 02:41:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:25.852 02:41:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:25.852 02:41:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.852 02:41:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.852 02:41:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.766 02:42:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:28.028 00:22:28.028 real 0m16.193s 00:22:28.028 user 1m3.490s 00:22:28.028 sys 0m7.184s 00:22:28.028 02:42:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:28.028 02:42:01 -- common/autotest_common.sh@10 -- # set +x 00:22:28.028 ************************************ 00:22:28.028 END TEST nvmf_fio_host 00:22:28.028 ************************************ 00:22:28.028 02:42:01 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:28.028 02:42:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:28.028 02:42:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:28.028 02:42:01 -- common/autotest_common.sh@10 -- # set +x 00:22:28.028 
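Before the failover test starts below, the shape of the nvmf_fio_host run that just finished is worth spelling out: the target inside the namespace was provisioned entirely over the RPC socket at /var/tmp/spdk.sock, and fio then drove it through the spdk_nvme fio plugin rather than the kernel NVMe/TCP initiator. A condensed sketch of that flow, using the same RPC verbs and plugin filename syntax seen above (paths shortened to be relative to the SPDK tree; the bdev name, subsystem NQN, serial number and block size are just the values this run used), is:

    # provision the namespaced target over /var/tmp/spdk.sock
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1          # 64 MiB RAM-backed bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # run fio against the subsystem through the SPDK user-space initiator
    LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The unusual --filename string is how the spdk_nvme ioengine encodes the transport ID of the subsystem to connect to; the randrw pattern and queue depth come from example_config.fio and the 4 KiB block size from the extra --bs flag, which is why the job banner above reports ioengine=spdk with iodepth=128.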
************************************ 00:22:28.028 START TEST nvmf_failover 00:22:28.028 ************************************ 00:22:28.028 02:42:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:28.291 * Looking for test storage... 00:22:28.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:28.291 02:42:01 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.291 02:42:01 -- nvmf/common.sh@7 -- # uname -s 00:22:28.291 02:42:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.291 02:42:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.291 02:42:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.291 02:42:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.291 02:42:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.291 02:42:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.291 02:42:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.291 02:42:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.291 02:42:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.291 02:42:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.291 02:42:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:28.291 02:42:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:28.291 02:42:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.291 02:42:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.291 02:42:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.291 02:42:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.291 02:42:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.291 02:42:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.291 02:42:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.291 02:42:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.291 02:42:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.291 02:42:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.291 02:42:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.291 02:42:01 -- paths/export.sh@5 -- # export PATH 00:22:28.291 02:42:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.291 02:42:01 -- nvmf/common.sh@47 -- # : 0 00:22:28.291 02:42:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.291 02:42:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.291 02:42:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.291 02:42:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.291 02:42:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.291 02:42:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.291 02:42:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.291 02:42:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.291 02:42:01 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:28.291 02:42:01 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:28.291 02:42:01 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:28.291 02:42:01 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.291 02:42:01 -- host/failover.sh@18 -- # nvmftestinit 00:22:28.291 02:42:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:28.291 02:42:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.291 02:42:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:28.291 02:42:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:28.291 02:42:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:28.291 02:42:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.291 02:42:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.291 02:42:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.291 02:42:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:28.291 02:42:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:28.291 02:42:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.291 02:42:01 -- common/autotest_common.sh@10 -- # set +x 00:22:34.881 02:42:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:34.881 02:42:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:34.881 02:42:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:34.881 02:42:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:34.881 02:42:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:34.881 02:42:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:34.881 02:42:08 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:34.881 02:42:08 -- nvmf/common.sh@295 -- # net_devs=() 00:22:34.881 02:42:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:34.881 02:42:08 -- nvmf/common.sh@296 -- # e810=() 00:22:34.881 02:42:08 -- nvmf/common.sh@296 -- # local -ga e810 00:22:34.881 02:42:08 -- nvmf/common.sh@297 -- # x722=() 00:22:34.881 02:42:08 -- nvmf/common.sh@297 -- # local -ga x722 00:22:34.881 02:42:08 -- nvmf/common.sh@298 -- # mlx=() 00:22:34.881 02:42:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:34.881 02:42:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.881 02:42:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.881 02:42:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.881 02:42:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.881 02:42:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.881 02:42:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.881 02:42:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.881 02:42:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.881 02:42:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.881 02:42:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.881 02:42:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.881 02:42:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:34.881 02:42:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:34.881 02:42:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:34.881 02:42:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.881 02:42:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:34.881 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:34.881 02:42:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.881 02:42:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:34.881 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:34.881 02:42:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:34.881 02:42:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.881 02:42:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.881 02:42:08 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:22:34.881 02:42:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.881 02:42:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:34.881 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:34.881 02:42:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.881 02:42:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.881 02:42:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.881 02:42:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:34.881 02:42:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.881 02:42:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:34.881 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:34.881 02:42:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.881 02:42:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:34.881 02:42:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:34.881 02:42:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:34.881 02:42:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:34.881 02:42:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.881 02:42:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.881 02:42:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.881 02:42:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:34.881 02:42:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.881 02:42:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.881 02:42:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:34.881 02:42:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.881 02:42:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.881 02:42:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:34.881 02:42:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:34.881 02:42:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.881 02:42:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.881 02:42:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.881 02:42:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.881 02:42:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:34.881 02:42:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.881 02:42:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.881 02:42:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.881 02:42:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:34.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:22:34.881 00:22:34.881 --- 10.0.0.2 ping statistics --- 00:22:34.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.881 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:22:35.152 02:42:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:35.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:22:35.152 00:22:35.152 --- 10.0.0.1 ping statistics --- 00:22:35.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.152 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:22:35.152 02:42:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.152 02:42:08 -- nvmf/common.sh@411 -- # return 0 00:22:35.152 02:42:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:35.152 02:42:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.152 02:42:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:35.152 02:42:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:35.152 02:42:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.152 02:42:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:35.152 02:42:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:35.152 02:42:08 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:35.152 02:42:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:35.152 02:42:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:35.152 02:42:08 -- common/autotest_common.sh@10 -- # set +x 00:22:35.152 02:42:08 -- nvmf/common.sh@470 -- # nvmfpid=215663 00:22:35.152 02:42:08 -- nvmf/common.sh@471 -- # waitforlisten 215663 00:22:35.152 02:42:08 -- common/autotest_common.sh@817 -- # '[' -z 215663 ']' 00:22:35.152 02:42:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.152 02:42:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:35.152 02:42:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.152 02:42:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:35.152 02:42:08 -- common/autotest_common.sh@10 -- # set +x 00:22:35.152 02:42:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:35.152 [2024-04-27 02:42:08.594767] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:22:35.152 [2024-04-27 02:42:08.594831] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.152 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.152 [2024-04-27 02:42:08.667158] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:35.152 [2024-04-27 02:42:08.738727] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.152 [2024-04-27 02:42:08.738784] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.152 [2024-04-27 02:42:08.738794] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.152 [2024-04-27 02:42:08.738801] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.152 [2024-04-27 02:42:08.738808] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:35.152 [2024-04-27 02:42:08.738916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.152 [2024-04-27 02:42:08.738945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.152 [2024-04-27 02:42:08.738947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.094 02:42:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:36.094 02:42:09 -- common/autotest_common.sh@850 -- # return 0 00:22:36.094 02:42:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:36.094 02:42:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:36.094 02:42:09 -- common/autotest_common.sh@10 -- # set +x 00:22:36.094 02:42:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.094 02:42:09 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:36.094 [2024-04-27 02:42:09.535077] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.094 02:42:09 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:36.355 Malloc0 00:22:36.355 02:42:09 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:36.355 02:42:09 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:36.616 02:42:10 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:36.877 [2024-04-27 02:42:10.258083] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.877 02:42:10 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:36.877 [2024-04-27 02:42:10.430514] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:36.877 02:42:10 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:37.138 [2024-04-27 02:42:10.599049] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:37.139 02:42:10 -- host/failover.sh@31 -- # bdevperf_pid=216393 00:22:37.139 02:42:10 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:37.139 02:42:10 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:37.139 02:42:10 -- host/failover.sh@34 -- # waitforlisten 216393 /var/tmp/bdevperf.sock 00:22:37.139 02:42:10 -- common/autotest_common.sh@817 -- # '[' -z 216393 ']' 00:22:37.139 02:42:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.139 02:42:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:37.139 02:42:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:37.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:37.139 02:42:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:37.139 02:42:10 -- common/autotest_common.sh@10 -- # set +x 00:22:38.082 02:42:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:38.082 02:42:11 -- common/autotest_common.sh@850 -- # return 0 00:22:38.082 02:42:11 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.082 NVMe0n1 00:22:38.082 02:42:11 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.653 00:22:38.653 02:42:12 -- host/failover.sh@39 -- # run_test_pid=216822 00:22:38.653 02:42:12 -- host/failover.sh@41 -- # sleep 1 00:22:38.653 02:42:12 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:39.603 02:42:13 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.906 [2024-04-27 02:42:13.228381] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the state(5) to be set 00:22:39.906 [2024-04-27 02:42:13.228415] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the state(5) to be set 00:22:39.906 [2024-04-27 02:42:13.228426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the state(5) to be set 00:22:39.906 [2024-04-27 02:42:13.228431] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the state(5) to be set 00:22:39.906 [2024-04-27 02:42:13.228435] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the state(5) to be set 00:22:39.906 [2024-04-27 02:42:13.228440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the state(5) to be set 00:22:39.906 [2024-04-27 02:42:13.228444] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the state(5) to be set 00:22:39.906 [2024-04-27 02:42:13.228449] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the state(5) to be set 00:22:39.906 [2024-04-27 02:42:13.228453] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the state(5) to be set 00:22:39.906 [2024-04-27 02:42:13.228458] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the state(5) to be set 00:22:39.906 [2024-04-27 02:42:13.228462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the state(5) to be set 00:22:39.906 [2024-04-27 02:42:13.228467] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the state(5) to be set 00:22:39.906 [2024-04-27 02:42:13.228471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the 
state(5) to be set 00:22:39.906 [2024-04-27 02:42:13.228476] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the state(5) to be set
[... the same tcp.c:1587:nvmf_tcp_qpair_set_recv_state error for tqpair=0x16f2300 repeated, identical apart from its timestamps, from 02:42:13.228480 through 02:42:13.228946 ...]
00:22:39.907 [2024-04-27 02:42:13.228950] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2300 is same with the
state(5) to be set 00:22:39.907 02:42:13 -- host/failover.sh@45 -- # sleep 3 00:22:43.214 02:42:16 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:43.214 00:22:43.214 02:42:16 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:43.214 [2024-04-27 02:42:16.655195] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655253] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655260] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655273] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655292] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655298] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655305] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655318] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655329] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655336] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655343] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655349] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.214 [2024-04-27 02:42:16.655356] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set
[... the same tcp.c:1587:nvmf_tcp_qpair_set_recv_state error for tqpair=0x16f31b0 repeated, identical apart from its timestamps, from 02:42:16.655363 through 02:42:16.655903 ...]
00:22:43.215 [2024-04-27 02:42:16.655909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same
with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655915] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655922] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655928] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655934] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655952] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655959] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655965] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655971] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655977] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655990] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.655996] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 [2024-04-27 02:42:16.656002] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f31b0 is same with the state(5) to be set 00:22:43.215 02:42:16 -- host/failover.sh@50 -- # sleep 3 00:22:46.519 02:42:19 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.519 [2024-04-27 02:42:19.830219] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.519 02:42:19 -- host/failover.sh@55 -- # sleep 1 00:22:47.466 02:42:20 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:47.466 [2024-04-27 02:42:21.011032] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.466 [2024-04-27 02:42:21.011066] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.466 [2024-04-27 02:42:21.011074] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set
[... the same tcp.c:1587:nvmf_tcp_qpair_set_recv_state error for tqpair=0x18acf30 repeated, identical apart from its timestamps, from 02:42:21.011081 through 02:42:21.011493 ...]
00:22:47.466 [2024-04-27 02:42:21.011499]
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.466 [2024-04-27 02:42:21.011506] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.466 [2024-04-27 02:42:21.011512] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.466 [2024-04-27 02:42:21.011519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.466 [2024-04-27 02:42:21.011525] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.467 [2024-04-27 02:42:21.011531] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.467 [2024-04-27 02:42:21.011538] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.467 [2024-04-27 02:42:21.011544] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.467 [2024-04-27 02:42:21.011551] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.467 [2024-04-27 02:42:21.011557] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.467 [2024-04-27 02:42:21.011564] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.467 [2024-04-27 02:42:21.011570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18acf30 is same with the state(5) to be set 00:22:47.467 02:42:21 -- host/failover.sh@59 -- # wait 216822 00:22:54.059 0 00:22:54.059 02:42:27 -- host/failover.sh@61 -- # killprocess 216393 00:22:54.059 02:42:27 -- common/autotest_common.sh@936 -- # '[' -z 216393 ']' 00:22:54.059 02:42:27 -- common/autotest_common.sh@940 -- # kill -0 216393 00:22:54.059 02:42:27 -- common/autotest_common.sh@941 -- # uname 00:22:54.059 02:42:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:54.059 02:42:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 216393 00:22:54.059 02:42:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:54.059 02:42:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:54.059 02:42:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 216393' 00:22:54.059 killing process with pid 216393 00:22:54.059 02:42:27 -- common/autotest_common.sh@955 -- # kill 216393 00:22:54.059 02:42:27 -- common/autotest_common.sh@960 -- # wait 216393 00:22:54.059 02:42:27 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:54.059 [2024-04-27 02:42:10.677407] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:22:54.059 [2024-04-27 02:42:10.677467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid216393 ] 00:22:54.059 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.059 [2024-04-27 02:42:10.744821] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.059 [2024-04-27 02:42:10.806932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.059 Running I/O for 15 seconds... 00:22:54.059 [2024-04-27 02:42:13.229532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.059 [2024-04-27 02:42:13.229567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.059 [2024-04-27 02:42:13.229583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.059 [2024-04-27 02:42:13.229591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.059 [2024-04-27 02:42:13.229601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.059 [2024-04-27 02:42:13.229608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.059 [2024-04-27 02:42:13.229618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.059 [2024-04-27 02:42:13.229625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.059 [2024-04-27 02:42:13.229634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.059 [2024-04-27 02:42:13.229641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.059 [2024-04-27 02:42:13.229650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.059 [2024-04-27 02:42:13.229657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.059 [2024-04-27 02:42:13.229666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.059 [2024-04-27 02:42:13.229673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.059 [2024-04-27 02:42:13.229682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.059 [2024-04-27 02:42:13.229689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.059 [2024-04-27 02:42:13.229698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95128 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:54.059 [2024-04-27 02:42:13.229704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.059 [2024-04-27 02:42:13.229713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.059 [2024-04-27 02:42:13.229720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.059 [2024-04-27 02:42:13.229729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.059 [2024-04-27 02:42:13.229736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.060 [2024-04-27 02:42:13.229750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.060 [2024-04-27 02:42:13.229758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.060 [2024-04-27 02:42:13.229766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.060 [2024-04-27 02:42:13.229773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.060 [2024-04-27 02:42:13.229783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.060 [2024-04-27 02:42:13.229790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.060 [2024-04-27 02:42:13.229799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.060 [2024-04-27 02:42:13.229806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.060 [2024-04-27 02:42:13.229815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.060 [2024-04-27 02:42:13.229822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.060 [2024-04-27 02:42:13.229831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.060 [2024-04-27 02:42:13.229838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.060 [2024-04-27 02:42:13.229847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.060 [2024-04-27 02:42:13.229854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.060 [2024-04-27 02:42:13.229862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.060 
[2024-04-27 02:42:13.229870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 00:22:54.060-00:22:54.063, 02:42:13.229878-02:42:13.231617: repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs — every remaining queued READ (sqid:1 nsid:1, lba 95216-95856, len:8, SGL TRANSPORT DATA BLOCK) and WRITE (sqid:1 nsid:1, lba 95864-96072, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) is completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:54.063 [2024-04-27 02:42:13.231637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-04-27 02:42:13.231643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-04-27 02:42:13.231650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96080 len:8 PRP1 0x0 PRP2 0x0
[2024-04-27 02:42:13.231658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-27 02:42:13.231694] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22dcf10 was disconnected and freed. reset controller.
00:22:54.063 [2024-04-27 02:42:13.231702] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-04-27 02:42:13.231721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-04-27 02:42:13.231729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-27 02:42:13.231737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-04-27 02:42:13.231744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-27 02:42:13.231752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-04-27 02:42:13.231760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-27 02:42:13.231768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-04-27 02:42:13.231775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-27 02:42:13.231782] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-04-27 02:42:13.235401] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-04-27 02:42:13.235427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22be420 (9): Bad file descriptor
[2024-04-27 02:42:13.316259] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
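The block above is the nvme bdev failover path this test exercises: the TCP qpair to 10.0.0.2:4420 goes away, every command still queued on I/O qid:1 (plus the outstanding ASYNC EVENT REQUESTs on the admin queue) is completed with ABORTED - SQ DELETION, and the controller is then reset against the second listener at 10.0.0.2:4421. The "(00/08)" printed with each completion is status code type 0x00 (generic) / status code 0x08 (ABORTED - SQ DELETION). Below is a minimal sketch, assuming an application driving the SPDK NVMe API directly, of how a completion callback could recognize that status and mark the I/O for resubmission once the reset finishes; the callback name and the needs_resubmit flag are illustrative and not part of this test.

    #include <stdbool.h>
    #include "spdk/nvme.h"
    #include "spdk/nvme_spec.h"

    /* Illustrative spdk_nvme_cmd_cb: detect the "ABORTED - SQ DELETION"
     * status seen throughout this log. cb_arg is assumed to point at a
     * per-I/O bool that the submitter checks after the controller reset. */
    static void
    io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            bool *needs_resubmit = cb_arg;

            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* SCT 0x00 / SC 0x08 == the "(00/08)" in the notices above:
                     * the submission queue was deleted while the command was
                     * still queued, so no data was transferred and the I/O is
                     * safe to resubmit after the failover/reset completes. */
                    *needs_resubmit = true;
            }
    }
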
00:22:54.063 [2024-04-27 02:42:16.657080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-27 02:42:16.657116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 00:22:54.063-00:22:54.065, 02:42:16.657132-02:42:16.658567: repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs — after the failover, the next batch of queued READ (sqid:1 nsid:1, lba 104112-104648, len:8, SGL TRANSPORT DATA BLOCK) and WRITE (sqid:1 nsid:1, lba 104656-104816, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands is likewise completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[2024-04-27 02:42:16.658578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:54.065 [2024-04-27 02:42:16.658585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.065 [2024-04-27 02:42:16.658593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.065 [2024-04-27 02:42:16.658600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.065 [2024-04-27 02:42:16.658609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.065 [2024-04-27 02:42:16.658616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.065 [2024-04-27 02:42:16.658625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.065 [2024-04-27 02:42:16.658632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.065 [2024-04-27 02:42:16.658641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.065 [2024-04-27 02:42:16.658648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.065 [2024-04-27 02:42:16.658656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.065 [2024-04-27 02:42:16.658663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.065 [2024-04-27 02:42:16.658672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.065 [2024-04-27 02:42:16.658679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.065 [2024-04-27 02:42:16.658689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.065 [2024-04-27 02:42:16.658696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.065 [2024-04-27 02:42:16.658705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.065 [2024-04-27 02:42:16.658712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.065 [2024-04-27 02:42:16.658721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.065 [2024-04-27 02:42:16.658729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.065 [2024-04-27 02:42:16.658738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.065 [2024-04-27 02:42:16.658746] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.065 [2024-04-27 02:42:16.658754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.065 [2024-04-27 02:42:16.658762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.065 [2024-04-27 02:42:16.658771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.065 [2024-04-27 02:42:16.658780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.065 [2024-04-27 02:42:16.658789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.658796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.658804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.658811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.658820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.658827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.658836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.658843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.658851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.658859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.658868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.658875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.658883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.658890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.658899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.658906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.658915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.658922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.658931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.658938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.658947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.658954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.658962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.658969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.658982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.658989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.658998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.659005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.066 [2024-04-27 02:42:16.659020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.066 [2024-04-27 02:42:16.659049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105048 len:8 PRP1 0x0 PRP2 0x0 00:22:54.066 [2024-04-27 02:42:16.659056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.066 [2024-04-27 02:42:16.659072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.066 [2024-04-27 02:42:16.659078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105056 len:8 PRP1 0x0 PRP2 0x0 00:22:54.066 [2024-04-27 02:42:16.659085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:54.066 [2024-04-27 02:42:16.659092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.066 [2024-04-27 02:42:16.659098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.066 [2024-04-27 02:42:16.659103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105064 len:8 PRP1 0x0 PRP2 0x0 00:22:54.066 [2024-04-27 02:42:16.659110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.066 [2024-04-27 02:42:16.659123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.066 [2024-04-27 02:42:16.659129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105072 len:8 PRP1 0x0 PRP2 0x0 00:22:54.066 [2024-04-27 02:42:16.659135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.066 [2024-04-27 02:42:16.659148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.066 [2024-04-27 02:42:16.659153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105080 len:8 PRP1 0x0 PRP2 0x0 00:22:54.066 [2024-04-27 02:42:16.659160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.066 [2024-04-27 02:42:16.659173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.066 [2024-04-27 02:42:16.659181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105088 len:8 PRP1 0x0 PRP2 0x0 00:22:54.066 [2024-04-27 02:42:16.659188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.066 [2024-04-27 02:42:16.659202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.066 [2024-04-27 02:42:16.659208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105096 len:8 PRP1 0x0 PRP2 0x0 00:22:54.066 [2024-04-27 02:42:16.659215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.066 [2024-04-27 02:42:16.659227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.066 [2024-04-27 02:42:16.659233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105104 len:8 PRP1 0x0 PRP2 0x0 00:22:54.066 [2024-04-27 02:42:16.659240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659247] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.066 [2024-04-27 02:42:16.659252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.066 [2024-04-27 02:42:16.659258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105112 len:8 PRP1 0x0 PRP2 0x0 00:22:54.066 [2024-04-27 02:42:16.659265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.066 [2024-04-27 02:42:16.659281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.066 [2024-04-27 02:42:16.659287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105120 len:8 PRP1 0x0 PRP2 0x0 00:22:54.066 [2024-04-27 02:42:16.659294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659329] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22ca910 was disconnected and freed. reset controller. 00:22:54.066 [2024-04-27 02:42:16.659338] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:54.066 [2024-04-27 02:42:16.659356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.066 [2024-04-27 02:42:16.659364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.066 [2024-04-27 02:42:16.659380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.066 [2024-04-27 02:42:16.659394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.066 [2024-04-27 02:42:16.659408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:16.659415] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:54.066 [2024-04-27 02:42:16.659439] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22be420 (9): Bad file descriptor 00:22:54.066 [2024-04-27 02:42:16.662983] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:54.066 [2024-04-27 02:42:16.786429] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
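The block above traces one path-failover cycle as reported by the driver: the TCP qpair to 10.0.0.2:4421 is disconnected and freed, every queued READ/WRITE is completed with ABORTED - SQ DELETION, bdev_nvme starts failover to 10.0.0.2:4422, and the subsequent controller reset completes successfully. A minimal sketch for summarising such a dump offline, assuming the console output has been saved to a hypothetical file named build.log (the patterns simply follow the record format printed above; grep, sort, uniq and wc are standard tools, not part of the test itself):

  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log | sort | uniq -c
  grep -o 'ABORTED - SQ DELETION' build.log | wc -l

The first pipeline counts how many aborted submissions were READ versus WRITE; the second counts the matching abort completions, using -o so that multiple records packed onto one console line are each counted.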
00:22:54.066 [2024-04-27 02:42:21.012208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.066 [2024-04-27 02:42:21.012245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.066 [2024-04-27 02:42:21.012263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.066 [2024-04-27 02:42:21.012271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012414] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.067 [2024-04-27 02:42:21.012782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.067 [2024-04-27 02:42:21.012798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.067 [2024-04-27 02:42:21.012814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.067 [2024-04-27 02:42:21.012830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.067 [2024-04-27 02:42:21.012846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.067 [2024-04-27 02:42:21.012864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.067 [2024-04-27 02:42:21.012880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.067 [2024-04-27 02:42:21.012896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63728 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.067 [2024-04-27 02:42:21.012921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.067 [2024-04-27 02:42:21.012928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.012937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.012944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.012953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.012960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.012969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.012976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.012985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.012992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.068 [2024-04-27 02:42:21.013039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.068 [2024-04-27 02:42:21.013055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.068 
[2024-04-27 02:42:21.013072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.068 [2024-04-27 02:42:21.013088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.068 [2024-04-27 02:42:21.013104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.068 [2024-04-27 02:42:21.013121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.068 [2024-04-27 02:42:21.013137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.068 [2024-04-27 02:42:21.013153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.068 [2024-04-27 02:42:21.013490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.068 [2024-04-27 02:42:21.013497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.069 [2024-04-27 02:42:21.013513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.069 [2024-04-27 02:42:21.013530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.069 [2024-04-27 02:42:21.013546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 
02:42:21.013733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.013990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.013997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.014006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.014013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.014021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.014028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.014037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.014044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.014053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.014060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.014069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.014076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.014085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.014093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.014102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.014109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.014122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.014129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.069 [2024-04-27 02:42:21.014138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.069 [2024-04-27 02:42:21.014145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.070 [2024-04-27 02:42:21.014161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.070 [2024-04-27 02:42:21.014177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.070 [2024-04-27 02:42:21.014192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.070 [2024-04-27 02:42:21.014208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63992 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:54.070 [2024-04-27 02:42:21.014224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.070 [2024-04-27 02:42:21.014240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.070 [2024-04-27 02:42:21.014256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.070 [2024-04-27 02:42:21.014272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.070 [2024-04-27 02:42:21.014291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.070 [2024-04-27 02:42:21.014309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.070 [2024-04-27 02:42:21.014339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.070 [2024-04-27 02:42:21.014345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64040 len:8 PRP1 0x0 PRP2 0x0 00:22:54.070 [2024-04-27 02:42:21.014353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014390] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22e0c40 was disconnected and freed. reset controller. 
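Each WRITE/READ notice in the block above is one queued command that bdev_nvme completed manually with ABORTED - SQ DELETION when the qpair was torn down for failover. A quick way to get the aborted-command count out of a captured console log, instead of reading the notices one by one (hypothetical one-liner, not part of failover.sh; the log file name is an assumption):

# Hypothetical helper: count the manually aborted commands in a saved console log.
grep -c 'ABORTED - SQ DELETION' bdevperf-console.log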
00:22:54.070 [2024-04-27 02:42:21.014399] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:54.070 [2024-04-27 02:42:21.014418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.070 [2024-04-27 02:42:21.014426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.070 [2024-04-27 02:42:21.014441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.070 [2024-04-27 02:42:21.014456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.070 [2024-04-27 02:42:21.014470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.070 [2024-04-27 02:42:21.014477] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:54.070 [2024-04-27 02:42:21.018033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:54.070 [2024-04-27 02:42:21.018058] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22be420 (9): Bad file descriptor 00:22:54.070 [2024-04-27 02:42:21.048655] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
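The failover from 10.0.0.2:4422 back to 10.0.0.2:4420 and the controller reset above are driven purely through RPC calls; a condensed sketch of the wiring host/failover.sh uses (the full xtrace follows below; addresses, ports and NQN are the ones from this run):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1
# target side: expose two extra listeners for the same subsystem
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
# bdevperf side: register one controller entry per path under the same name
for port in 4420 4421 4422; do
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
done
# detaching the path currently in use is what triggers the failover/reset notices above
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller \
    NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN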
00:22:54.070
00:22:54.070 Latency(us)
00:22:54.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:54.070 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:54.070 Verification LBA range: start 0x0 length 0x4000
00:22:54.070 NVMe0n1 : 15.01 9430.17 36.84 589.38 0.00 12745.71 1085.44 20206.93
00:22:54.070 ===================================================================================================================
00:22:54.070 Total : 9430.17 36.84 589.38 0.00 12745.71 1085.44 20206.93
00:22:54.070 Received shutdown signal, test time was about 15.000000 seconds
00:22:54.070
00:22:54.070 Latency(us)
00:22:54.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:54.070 ===================================================================================================================
00:22:54.070 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:54.070 02:42:27 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:54.070 02:42:27 -- host/failover.sh@65 -- # count=3
00:22:54.070 02:42:27 -- host/failover.sh@67 -- # (( count != 3 ))
00:22:54.070 02:42:27 -- host/failover.sh@73 -- # bdevperf_pid=219838
00:22:54.070 02:42:27 -- host/failover.sh@75 -- # waitforlisten 219838 /var/tmp/bdevperf.sock
00:22:54.070 02:42:27 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:54.070 02:42:27 -- common/autotest_common.sh@817 -- # '[' -z 219838 ']'
00:22:54.070 02:42:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:54.070 02:42:27 -- common/autotest_common.sh@822 -- # local max_retries=100
00:22:54.070 02:42:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:54.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
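The count=3 traced above is the pass criterion for the run that just finished: one 'Resetting controller successful' notice is expected per path that was torn down. On its own the check is simply the following sketch, assuming the timed run's output was captured to try.txt (the file the script cats and removes further on):

# Pass check equivalent to host/failover.sh@65-67 for this run
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
count=$(grep -c 'Resetting controller successful' "$SPDK/test/nvmf/host/try.txt")
if (( count != 3 )); then exit 1; fi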
00:22:54.070 02:42:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:54.070 02:42:27 -- common/autotest_common.sh@10 -- # set +x 00:22:54.641 02:42:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:54.641 02:42:28 -- common/autotest_common.sh@850 -- # return 0 00:22:54.641 02:42:28 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:54.902 [2024-04-27 02:42:28.397567] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:54.902 02:42:28 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:55.162 [2024-04-27 02:42:28.562000] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:55.162 02:42:28 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.423 NVMe0n1 00:22:55.423 02:42:28 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.684 00:22:55.684 02:42:29 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:56.255 00:22:56.255 02:42:29 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:56.255 02:42:29 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:56.255 02:42:29 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:56.515 02:42:29 -- host/failover.sh@87 -- # sleep 3 00:22:59.817 02:42:32 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:59.817 02:42:32 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:59.817 02:42:33 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:59.817 02:42:33 -- host/failover.sh@90 -- # run_test_pid=220860 00:22:59.817 02:42:33 -- host/failover.sh@92 -- # wait 220860 00:23:00.759 0 00:23:00.759 02:42:34 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:00.759 [2024-04-27 02:42:27.479414] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
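For the second pass the script starts a fresh bdevperf in RPC-wait mode, re-attaches a path and kicks the workload off through bdevperf.py; stripped of the autotest wrappers, that control flow is roughly as sketched below (paths as used in this workspace; the saved try.txt output of the run is what gets printed next):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -z = start idle and wait for configuration over the RPC socket given with -r
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# perform_tests starts the queued verify workload
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests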
00:23:00.759 [2024-04-27 02:42:27.479473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid219838 ] 00:23:00.759 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.759 [2024-04-27 02:42:27.538165] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.759 [2024-04-27 02:42:27.600523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.759 [2024-04-27 02:42:29.917287] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:00.759 [2024-04-27 02:42:29.917336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.759 [2024-04-27 02:42:29.917346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.759 [2024-04-27 02:42:29.917356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.759 [2024-04-27 02:42:29.917364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.759 [2024-04-27 02:42:29.917372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.759 [2024-04-27 02:42:29.917379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.759 [2024-04-27 02:42:29.917387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.759 [2024-04-27 02:42:29.917394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.759 [2024-04-27 02:42:29.917401] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.759 [2024-04-27 02:42:29.917429] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:00.759 [2024-04-27 02:42:29.917443] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c3420 (9): Bad file descriptor 00:23:00.759 [2024-04-27 02:42:30.061576] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:00.759 Running I/O for 1 seconds... 
00:23:00.759 00:23:00.759 Latency(us) 00:23:00.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.759 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:00.759 Verification LBA range: start 0x0 length 0x4000 00:23:00.759 NVMe0n1 : 1.01 8419.75 32.89 0.00 0.00 15135.09 3153.92 17367.04 00:23:00.759 =================================================================================================================== 00:23:00.759 Total : 8419.75 32.89 0.00 0.00 15135.09 3153.92 17367.04 00:23:00.759 02:42:34 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.759 02:42:34 -- host/failover.sh@95 -- # grep -q NVMe0 00:23:01.020 02:42:34 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:01.020 02:42:34 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:01.020 02:42:34 -- host/failover.sh@99 -- # grep -q NVMe0 00:23:01.281 02:42:34 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:01.542 02:42:34 -- host/failover.sh@101 -- # sleep 3 00:23:04.848 02:42:37 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.848 02:42:37 -- host/failover.sh@103 -- # grep -q NVMe0 00:23:04.848 02:42:38 -- host/failover.sh@108 -- # killprocess 219838 00:23:04.848 02:42:38 -- common/autotest_common.sh@936 -- # '[' -z 219838 ']' 00:23:04.848 02:42:38 -- common/autotest_common.sh@940 -- # kill -0 219838 00:23:04.848 02:42:38 -- common/autotest_common.sh@941 -- # uname 00:23:04.848 02:42:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:04.848 02:42:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 219838 00:23:04.848 02:42:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:04.848 02:42:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:04.848 02:42:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 219838' 00:23:04.848 killing process with pid 219838 00:23:04.848 02:42:38 -- common/autotest_common.sh@955 -- # kill 219838 00:23:04.848 02:42:38 -- common/autotest_common.sh@960 -- # wait 219838 00:23:04.848 02:42:38 -- host/failover.sh@110 -- # sync 00:23:04.848 02:42:38 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:04.848 02:42:38 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:04.848 02:42:38 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:04.848 02:42:38 -- host/failover.sh@116 -- # nvmftestfini 00:23:04.848 02:42:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:04.848 02:42:38 -- nvmf/common.sh@117 -- # sync 00:23:04.848 02:42:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.848 02:42:38 -- nvmf/common.sh@120 -- # set +e 00:23:04.848 02:42:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.848 02:42:38 -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:23:04.848 rmmod nvme_tcp 00:23:05.110 rmmod nvme_fabrics 00:23:05.110 rmmod nvme_keyring 00:23:05.110 02:42:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:05.110 02:42:38 -- nvmf/common.sh@124 -- # set -e 00:23:05.110 02:42:38 -- nvmf/common.sh@125 -- # return 0 00:23:05.110 02:42:38 -- nvmf/common.sh@478 -- # '[' -n 215663 ']' 00:23:05.110 02:42:38 -- nvmf/common.sh@479 -- # killprocess 215663 00:23:05.110 02:42:38 -- common/autotest_common.sh@936 -- # '[' -z 215663 ']' 00:23:05.110 02:42:38 -- common/autotest_common.sh@940 -- # kill -0 215663 00:23:05.110 02:42:38 -- common/autotest_common.sh@941 -- # uname 00:23:05.110 02:42:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:05.110 02:42:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 215663 00:23:05.110 02:42:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:05.110 02:42:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:05.110 02:42:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 215663' 00:23:05.110 killing process with pid 215663 00:23:05.110 02:42:38 -- common/autotest_common.sh@955 -- # kill 215663 00:23:05.110 02:42:38 -- common/autotest_common.sh@960 -- # wait 215663 00:23:05.110 02:42:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:05.110 02:42:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:05.110 02:42:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:05.110 02:42:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:05.110 02:42:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:05.110 02:42:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.110 02:42:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.371 02:42:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.291 02:42:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:07.291 00:23:07.291 real 0m39.214s 00:23:07.291 user 2m2.286s 00:23:07.291 sys 0m7.791s 00:23:07.291 02:42:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:07.291 02:42:40 -- common/autotest_common.sh@10 -- # set +x 00:23:07.291 ************************************ 00:23:07.291 END TEST nvmf_failover 00:23:07.291 ************************************ 00:23:07.291 02:42:40 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:07.291 02:42:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:07.291 02:42:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:07.291 02:42:40 -- common/autotest_common.sh@10 -- # set +x 00:23:07.553 ************************************ 00:23:07.553 START TEST nvmf_discovery 00:23:07.553 ************************************ 00:23:07.553 02:42:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:07.553 * Looking for test storage... 
00:23:07.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:07.553 02:42:41 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.553 02:42:41 -- nvmf/common.sh@7 -- # uname -s 00:23:07.553 02:42:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.553 02:42:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.553 02:42:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.553 02:42:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.553 02:42:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.553 02:42:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.553 02:42:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.553 02:42:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.553 02:42:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.553 02:42:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.553 02:42:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.553 02:42:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.553 02:42:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.553 02:42:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.553 02:42:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.553 02:42:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.553 02:42:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.553 02:42:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.553 02:42:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.553 02:42:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.553 02:42:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.553 02:42:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.553 02:42:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.553 02:42:41 -- paths/export.sh@5 -- # export PATH 00:23:07.553 02:42:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.553 02:42:41 -- nvmf/common.sh@47 -- # : 0 00:23:07.553 02:42:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:07.554 02:42:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:07.554 02:42:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.554 02:42:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.554 02:42:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.554 02:42:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:07.554 02:42:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:07.554 02:42:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:07.554 02:42:41 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:07.554 02:42:41 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:07.554 02:42:41 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:07.554 02:42:41 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:07.554 02:42:41 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:07.554 02:42:41 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:07.554 02:42:41 -- host/discovery.sh@25 -- # nvmftestinit 00:23:07.554 02:42:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:07.554 02:42:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.554 02:42:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:07.554 02:42:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:07.554 02:42:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:07.554 02:42:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.554 02:42:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.554 02:42:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.554 02:42:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:07.554 02:42:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:07.554 02:42:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:07.554 02:42:41 -- common/autotest_common.sh@10 -- # set +x 00:23:15.707 02:42:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:15.707 02:42:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:15.707 02:42:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:15.707 02:42:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:15.707 02:42:47 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:15.707 02:42:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:15.707 02:42:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:15.707 02:42:47 -- nvmf/common.sh@295 -- # net_devs=() 00:23:15.707 02:42:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:15.707 02:42:47 -- nvmf/common.sh@296 -- # e810=() 00:23:15.707 02:42:47 -- nvmf/common.sh@296 -- # local -ga e810 00:23:15.707 02:42:47 -- nvmf/common.sh@297 -- # x722=() 00:23:15.707 02:42:47 -- nvmf/common.sh@297 -- # local -ga x722 00:23:15.707 02:42:47 -- nvmf/common.sh@298 -- # mlx=() 00:23:15.707 02:42:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:15.707 02:42:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.707 02:42:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.707 02:42:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.707 02:42:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.707 02:42:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.707 02:42:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.707 02:42:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.707 02:42:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.707 02:42:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.707 02:42:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.707 02:42:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.707 02:42:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:15.707 02:42:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:15.707 02:42:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:15.707 02:42:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.707 02:42:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:15.707 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:15.707 02:42:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.707 02:42:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:15.707 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:15.707 02:42:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:15.707 02:42:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:15.707 02:42:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.707 
02:42:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.707 02:42:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:15.707 02:42:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.707 02:42:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:15.708 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:15.708 02:42:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.708 02:42:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.708 02:42:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.708 02:42:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:15.708 02:42:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.708 02:42:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:15.708 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:15.708 02:42:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.708 02:42:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:15.708 02:42:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:15.708 02:42:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:15.708 02:42:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:15.708 02:42:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:15.708 02:42:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.708 02:42:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.708 02:42:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.708 02:42:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:15.708 02:42:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.708 02:42:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.708 02:42:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:15.708 02:42:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.708 02:42:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.708 02:42:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:15.708 02:42:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:15.708 02:42:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.708 02:42:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.708 02:42:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.708 02:42:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.708 02:42:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:15.708 02:42:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.708 02:42:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.708 02:42:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.708 02:42:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:15.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:23:15.708 00:23:15.708 --- 10.0.0.2 ping statistics --- 00:23:15.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.708 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:23:15.708 02:42:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.487 ms 00:23:15.708 00:23:15.708 --- 10.0.0.1 ping statistics --- 00:23:15.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.708 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:23:15.708 02:42:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.708 02:42:48 -- nvmf/common.sh@411 -- # return 0 00:23:15.708 02:42:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:15.708 02:42:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.708 02:42:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:15.708 02:42:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:15.708 02:42:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.708 02:42:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:15.708 02:42:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:15.708 02:42:48 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:15.708 02:42:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:15.708 02:42:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:15.708 02:42:48 -- common/autotest_common.sh@10 -- # set +x 00:23:15.708 02:42:48 -- nvmf/common.sh@470 -- # nvmfpid=226041 00:23:15.708 02:42:48 -- nvmf/common.sh@471 -- # waitforlisten 226041 00:23:15.708 02:42:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:15.708 02:42:48 -- common/autotest_common.sh@817 -- # '[' -z 226041 ']' 00:23:15.708 02:42:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.708 02:42:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:15.708 02:42:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.708 02:42:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:15.708 02:42:48 -- common/autotest_common.sh@10 -- # set +x 00:23:15.708 [2024-04-27 02:42:48.333629] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:23:15.708 [2024-04-27 02:42:48.333691] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.708 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.708 [2024-04-27 02:42:48.405168] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.708 [2024-04-27 02:42:48.478171] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.708 [2024-04-27 02:42:48.478213] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.708 [2024-04-27 02:42:48.478220] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.708 [2024-04-27 02:42:48.478227] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.708 [2024-04-27 02:42:48.478232] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
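Everything from here on runs against a target isolated in its own network namespace, so NVMe/TCP traffic genuinely crosses between the two E810 ports instead of looping back. Condensed, the setup nvmf_tcp_init traced above amounts to (interface, namespace and address names as detected on this node):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# the matched E810 functions resolve to their netdevs through sysfs
ls /sys/bus/pci/devices/0000:4b:00.0/net/    # -> cvl_0_0 (target side)
ls /sys/bus/pci/devices/0000:4b:00.1/net/    # -> cvl_0_1 (initiator side)
# move the target port into a private namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator
# the nvmf target itself is then launched inside the namespace
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &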
00:23:15.708 [2024-04-27 02:42:48.478253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.708 02:42:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:15.708 02:42:49 -- common/autotest_common.sh@850 -- # return 0 00:23:15.708 02:42:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:15.708 02:42:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:15.708 02:42:49 -- common/autotest_common.sh@10 -- # set +x 00:23:15.708 02:42:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.708 02:42:49 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:15.708 02:42:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.708 02:42:49 -- common/autotest_common.sh@10 -- # set +x 00:23:15.708 [2024-04-27 02:42:49.145308] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.708 02:42:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.708 02:42:49 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:15.708 02:42:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.708 02:42:49 -- common/autotest_common.sh@10 -- # set +x 00:23:15.708 [2024-04-27 02:42:49.153470] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:15.708 02:42:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.708 02:42:49 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:15.708 02:42:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.708 02:42:49 -- common/autotest_common.sh@10 -- # set +x 00:23:15.708 null0 00:23:15.708 02:42:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.708 02:42:49 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:15.708 02:42:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.708 02:42:49 -- common/autotest_common.sh@10 -- # set +x 00:23:15.708 null1 00:23:15.708 02:42:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.708 02:42:49 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:15.708 02:42:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.708 02:42:49 -- common/autotest_common.sh@10 -- # set +x 00:23:15.708 02:42:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.708 02:42:49 -- host/discovery.sh@45 -- # hostpid=226230 00:23:15.708 02:42:49 -- host/discovery.sh@46 -- # waitforlisten 226230 /tmp/host.sock 00:23:15.708 02:42:49 -- common/autotest_common.sh@817 -- # '[' -z 226230 ']' 00:23:15.708 02:42:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:15.708 02:42:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:15.708 02:42:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:15.708 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:15.708 02:42:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:15.708 02:42:49 -- common/autotest_common.sh@10 -- # set +x 00:23:15.708 02:42:49 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:15.708 [2024-04-27 02:42:49.231742] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
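Before the host side starts polling, the target only needs the TCP transport, the discovery listener and a couple of null bdevs to publish; spelled out with scripts/rpc.py in place of the suite's rpc_cmd wrapper (which talks to the main target's default RPC socket), the calls traced above reduce to:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$SPDK/scripts/rpc.py bdev_null_create null0 1000 512
$SPDK/scripts/rpc.py bdev_null_create null1 1000 512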
00:23:15.708 [2024-04-27 02:42:49.231789] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid226230 ] 00:23:15.708 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.709 [2024-04-27 02:42:49.289235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.969 [2024-04-27 02:42:49.351582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.540 02:42:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:16.540 02:42:49 -- common/autotest_common.sh@850 -- # return 0 00:23:16.540 02:42:49 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.540 02:42:49 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:16.540 02:42:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.540 02:42:49 -- common/autotest_common.sh@10 -- # set +x 00:23:16.540 02:42:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.540 02:42:49 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:16.540 02:42:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.540 02:42:49 -- common/autotest_common.sh@10 -- # set +x 00:23:16.540 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.540 02:42:50 -- host/discovery.sh@72 -- # notify_id=0 00:23:16.540 02:42:50 -- host/discovery.sh@83 -- # get_subsystem_names 00:23:16.540 02:42:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.540 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.540 02:42:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.540 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.540 02:42:50 -- host/discovery.sh@59 -- # sort 00:23:16.540 02:42:50 -- host/discovery.sh@59 -- # xargs 00:23:16.540 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.540 02:42:50 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:16.540 02:42:50 -- host/discovery.sh@84 -- # get_bdev_list 00:23:16.540 02:42:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.540 02:42:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.540 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.540 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.540 02:42:50 -- host/discovery.sh@55 -- # sort 00:23:16.540 02:42:50 -- host/discovery.sh@55 -- # xargs 00:23:16.540 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.540 02:42:50 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:16.540 02:42:50 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:16.540 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.540 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.540 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.540 02:42:50 -- host/discovery.sh@87 -- # get_subsystem_names 00:23:16.540 02:42:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.540 02:42:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.540 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.540 02:42:50 -- host/discovery.sh@59 -- # sort 
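On the host side discovery.sh runs a second nvmf_tgt (the "host" app started above with -m 0x1 -r /tmp/host.sock), points it at the discovery service on port 8009, and then polls a pair of helpers until the discovered controller and its namespace appear. Written out with scripts/rpc.py in place of the rpc_cmd wrapper, the helpers being traced here are:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
get_subsystem_names() {   # names of NVMe controllers the host app has attached
    "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {         # bdevs created from the discovered namespaces
    "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
# discovery is started once; the test then loops until the helpers report nvme0 / nvme0n1
"$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test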
00:23:16.540 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.540 02:42:50 -- host/discovery.sh@59 -- # xargs 00:23:16.540 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.801 02:42:50 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:16.801 02:42:50 -- host/discovery.sh@88 -- # get_bdev_list 00:23:16.801 02:42:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.801 02:42:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.801 02:42:50 -- host/discovery.sh@55 -- # sort 00:23:16.801 02:42:50 -- host/discovery.sh@55 -- # xargs 00:23:16.801 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.801 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.801 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.801 02:42:50 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:16.801 02:42:50 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:16.801 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.801 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.801 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.801 02:42:50 -- host/discovery.sh@91 -- # get_subsystem_names 00:23:16.801 02:42:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.801 02:42:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.801 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.801 02:42:50 -- host/discovery.sh@59 -- # sort 00:23:16.801 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.801 02:42:50 -- host/discovery.sh@59 -- # xargs 00:23:16.801 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.801 02:42:50 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:16.801 02:42:50 -- host/discovery.sh@92 -- # get_bdev_list 00:23:16.801 02:42:50 -- host/discovery.sh@55 -- # sort 00:23:16.801 02:42:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.801 02:42:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.801 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.801 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.801 02:42:50 -- host/discovery.sh@55 -- # xargs 00:23:16.801 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.801 02:42:50 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:16.801 02:42:50 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:16.801 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.801 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.801 [2024-04-27 02:42:50.344558] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.801 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.801 02:42:50 -- host/discovery.sh@97 -- # get_subsystem_names 00:23:16.801 02:42:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.801 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.801 02:42:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.801 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.801 02:42:50 -- host/discovery.sh@59 -- # sort 00:23:16.801 02:42:50 -- host/discovery.sh@59 -- # xargs 00:23:16.801 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.801 02:42:50 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:16.801 02:42:50 -- host/discovery.sh@98 -- # get_bdev_list 00:23:16.802 02:42:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.802 02:42:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.802 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.802 02:42:50 -- host/discovery.sh@55 -- # sort 00:23:16.802 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.802 02:42:50 -- host/discovery.sh@55 -- # xargs 00:23:16.802 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.062 02:42:50 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:17.062 02:42:50 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:17.062 02:42:50 -- host/discovery.sh@79 -- # expected_count=0 00:23:17.062 02:42:50 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:17.062 02:42:50 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:17.062 02:42:50 -- common/autotest_common.sh@901 -- # local max=10 00:23:17.062 02:42:50 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:17.062 02:42:50 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:17.062 02:42:50 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:17.062 02:42:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:17.062 02:42:50 -- host/discovery.sh@74 -- # jq '. | length' 00:23:17.062 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.062 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.062 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.062 02:42:50 -- host/discovery.sh@74 -- # notification_count=0 00:23:17.062 02:42:50 -- host/discovery.sh@75 -- # notify_id=0 00:23:17.062 02:42:50 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:17.062 02:42:50 -- common/autotest_common.sh@904 -- # return 0 00:23:17.062 02:42:50 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:17.062 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.062 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.062 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.062 02:42:50 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:17.062 02:42:50 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:17.062 02:42:50 -- common/autotest_common.sh@901 -- # local max=10 00:23:17.062 02:42:50 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:17.062 02:42:50 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:17.062 02:42:50 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:17.062 02:42:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.062 02:42:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.062 02:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.062 02:42:50 -- host/discovery.sh@59 -- # sort 00:23:17.062 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.062 02:42:50 -- host/discovery.sh@59 -- # xargs 00:23:17.062 02:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:23:17.062 02:42:50 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:23:17.062 02:42:50 -- common/autotest_common.sh@906 -- # sleep 1 00:23:17.633 [2024-04-27 02:42:51.054732] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:17.633 [2024-04-27 02:42:51.054754] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:17.633 [2024-04-27 02:42:51.054769] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:17.633 [2024-04-27 02:42:51.184175] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:17.894 [2024-04-27 02:42:51.371273] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:17.894 [2024-04-27 02:42:51.371301] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:18.159 02:42:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:18.160 02:42:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:18.160 02:42:51 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:18.160 02:42:51 -- host/discovery.sh@59 -- # sort 00:23:18.160 02:42:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:18.160 02:42:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:18.160 02:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.160 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:18.160 02:42:51 -- host/discovery.sh@59 -- # xargs 00:23:18.160 02:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.160 02:42:51 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.160 02:42:51 -- common/autotest_common.sh@904 -- # return 0 00:23:18.160 02:42:51 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:18.160 02:42:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:18.160 02:42:51 -- common/autotest_common.sh@901 -- # local max=10 00:23:18.160 02:42:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:18.160 02:42:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:18.160 02:42:51 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:18.160 02:42:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.160 02:42:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:18.160 02:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.160 02:42:51 -- host/discovery.sh@55 -- # sort 00:23:18.160 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:18.160 02:42:51 -- host/discovery.sh@55 -- # xargs 00:23:18.160 02:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.160 02:42:51 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:18.160 02:42:51 -- common/autotest_common.sh@904 -- # return 0 00:23:18.160 02:42:51 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:18.160 02:42:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:18.160 02:42:51 -- common/autotest_common.sh@901 -- # local max=10 00:23:18.160 02:42:51 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:23:18.160 02:42:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:18.160 02:42:51 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:18.160 02:42:51 -- host/discovery.sh@63 -- # sort -n 00:23:18.160 02:42:51 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:18.160 02:42:51 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:18.160 02:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.160 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:18.160 02:42:51 -- host/discovery.sh@63 -- # xargs 00:23:18.160 02:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.160 02:42:51 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:23:18.160 02:42:51 -- common/autotest_common.sh@904 -- # return 0 00:23:18.160 02:42:51 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:18.160 02:42:51 -- host/discovery.sh@79 -- # expected_count=1 00:23:18.160 02:42:51 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:18.160 02:42:51 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:18.160 02:42:51 -- common/autotest_common.sh@901 -- # local max=10 00:23:18.160 02:42:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:18.160 02:42:51 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:18.160 02:42:51 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:18.160 02:42:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:18.160 02:42:51 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:18.160 02:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.160 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:18.160 02:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.422 02:42:51 -- host/discovery.sh@74 -- # notification_count=1 00:23:18.422 02:42:51 -- host/discovery.sh@75 -- # notify_id=1 00:23:18.422 02:42:51 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:18.422 02:42:51 -- common/autotest_common.sh@904 -- # return 0 00:23:18.422 02:42:51 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:18.422 02:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.422 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:18.422 02:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.422 02:42:51 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:18.422 02:42:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:18.422 02:42:51 -- common/autotest_common.sh@901 -- # local max=10 00:23:18.422 02:42:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:18.422 02:42:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:18.422 02:42:51 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:18.422 02:42:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.422 02:42:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:18.422 02:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.422 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:18.422 02:42:51 -- host/discovery.sh@55 -- # sort 00:23:18.422 02:42:51 -- host/discovery.sh@55 -- # xargs 00:23:18.422 02:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.422 02:42:51 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:18.422 02:42:51 -- common/autotest_common.sh@904 -- # return 0 00:23:18.422 02:42:51 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:18.422 02:42:51 -- host/discovery.sh@79 -- # expected_count=1 00:23:18.422 02:42:51 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:18.422 02:42:51 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:18.422 02:42:51 -- common/autotest_common.sh@901 -- # local max=10 00:23:18.422 02:42:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:18.422 02:42:51 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:18.422 02:42:51 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:18.422 02:42:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:18.422 02:42:51 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:18.422 02:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.422 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:18.422 02:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.422 02:42:51 -- host/discovery.sh@74 -- # notification_count=1 00:23:18.422 02:42:51 -- host/discovery.sh@75 -- # notify_id=2 00:23:18.422 02:42:51 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:18.422 02:42:51 -- common/autotest_common.sh@904 -- # return 0 00:23:18.422 02:42:51 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:18.422 02:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.422 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:18.422 [2024-04-27 02:42:51.900837] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:18.422 [2024-04-27 02:42:51.901455] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:18.422 [2024-04-27 02:42:51.901480] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:18.422 02:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.422 02:42:51 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:18.422 02:42:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:18.422 02:42:51 -- common/autotest_common.sh@901 -- # local max=10 00:23:18.422 02:42:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:18.422 02:42:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:18.422 02:42:51 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:18.422 02:42:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:18.422 02:42:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:18.422 02:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.422 02:42:51 -- host/discovery.sh@59 -- # sort 00:23:18.422 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:18.422 02:42:51 -- host/discovery.sh@59 -- # xargs 00:23:18.422 02:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.422 02:42:51 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.422 02:42:51 -- common/autotest_common.sh@904 -- # return 0 00:23:18.422 02:42:51 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:18.422 02:42:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:18.422 02:42:51 -- common/autotest_common.sh@901 -- # local max=10 00:23:18.422 02:42:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:18.422 02:42:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:18.422 02:42:51 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:18.422 02:42:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.422 02:42:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:18.422 02:42:51 -- host/discovery.sh@55 -- # sort 00:23:18.422 02:42:51 -- host/discovery.sh@55 -- # xargs 00:23:18.422 02:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.422 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:18.422 02:42:51 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:23:18.422 02:42:52 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:18.422 02:42:52 -- common/autotest_common.sh@904 -- # return 0 00:23:18.422 02:42:52 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:18.422 02:42:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:18.422 02:42:52 -- common/autotest_common.sh@901 -- # local max=10 00:23:18.422 02:42:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:18.422 02:42:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:18.422 02:42:52 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:18.422 02:42:52 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:18.422 02:42:52 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:18.422 02:42:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.422 02:42:52 -- common/autotest_common.sh@10 -- # set +x 00:23:18.422 02:42:52 -- host/discovery.sh@63 -- # sort -n 00:23:18.422 02:42:52 -- host/discovery.sh@63 -- # xargs 00:23:18.422 02:42:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.422 [2024-04-27 02:42:52.030870] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:18.682 02:42:52 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:18.682 02:42:52 -- common/autotest_common.sh@906 -- # sleep 1 00:23:18.682 [2024-04-27 02:42:52.131911] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:18.682 [2024-04-27 02:42:52.131932] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:18.682 [2024-04-27 02:42:52.131938] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:19.622 02:42:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:19.622 02:42:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:19.622 02:42:53 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:19.622 02:42:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:19.622 02:42:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:19.622 02:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.622 02:42:53 -- host/discovery.sh@63 -- # sort -n 00:23:19.622 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:19.622 02:42:53 -- host/discovery.sh@63 -- # xargs 00:23:19.622 02:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.622 02:42:53 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:19.622 02:42:53 -- common/autotest_common.sh@904 -- # return 0 00:23:19.622 02:42:53 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:19.622 02:42:53 -- host/discovery.sh@79 -- # expected_count=0 00:23:19.622 02:42:53 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:19.622 02:42:53 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:19.622 02:42:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:19.622 02:42:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:19.622 02:42:53 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:19.622 02:42:53 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:19.622 02:42:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:19.622 02:42:53 -- host/discovery.sh@74 -- # jq '. | length' 00:23:19.622 02:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.622 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:19.622 02:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.622 02:42:53 -- host/discovery.sh@74 -- # notification_count=0 00:23:19.622 02:42:53 -- host/discovery.sh@75 -- # notify_id=2 00:23:19.622 02:42:53 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:19.622 02:42:53 -- common/autotest_common.sh@904 -- # return 0 00:23:19.622 02:42:53 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:19.622 02:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.622 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:19.622 [2024-04-27 02:42:53.180763] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:19.622 [2024-04-27 02:42:53.180785] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:19.622 [2024-04-27 02:42:53.182842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.622 [2024-04-27 02:42:53.182861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.622 [2024-04-27 02:42:53.182870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.622 [2024-04-27 02:42:53.182878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.622 [2024-04-27 02:42:53.182886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.622 [2024-04-27 02:42:53.182893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.622 [2024-04-27 02:42:53.182900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.622 [2024-04-27 02:42:53.182908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.622 [2024-04-27 02:42:53.182914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f1c00 is same with the state(5) to be set 00:23:19.622 02:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.622 02:42:53 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:19.622 02:42:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:19.622 02:42:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:19.622 02:42:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:19.622 02:42:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:19.622 02:42:53 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:19.622 02:42:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:19.622 02:42:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:19.622 02:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.622 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:19.622 02:42:53 -- host/discovery.sh@59 -- # sort 00:23:19.622 02:42:53 -- host/discovery.sh@59 -- # xargs 00:23:19.622 [2024-04-27 02:42:53.192856] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f1c00 (9): Bad file descriptor 00:23:19.622 [2024-04-27 02:42:53.202897] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.622 [2024-04-27 02:42:53.203243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.622 [2024-04-27 02:42:53.203784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.622 [2024-04-27 02:42:53.203822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f1c00 with addr=10.0.0.2, port=4420 00:23:19.622 [2024-04-27 02:42:53.203833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f1c00 is same with the state(5) to be set 00:23:19.622 [2024-04-27 02:42:53.203851] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f1c00 (9): Bad file descriptor 00:23:19.622 [2024-04-27 02:42:53.203879] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.622 [2024-04-27 02:42:53.203888] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.622 [2024-04-27 02:42:53.203897] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.622 [2024-04-27 02:42:53.203914] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:19.622 02:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.622 [2024-04-27 02:42:53.212955] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.622 [2024-04-27 02:42:53.213575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.622 [2024-04-27 02:42:53.214111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.622 [2024-04-27 02:42:53.214125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f1c00 with addr=10.0.0.2, port=4420 00:23:19.622 [2024-04-27 02:42:53.214135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f1c00 is same with the state(5) to be set 00:23:19.622 [2024-04-27 02:42:53.214153] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f1c00 (9): Bad file descriptor 00:23:19.622 [2024-04-27 02:42:53.214179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.622 [2024-04-27 02:42:53.214188] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.622 [2024-04-27 02:42:53.214196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.622 [2024-04-27 02:42:53.214210] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.622 [2024-04-27 02:42:53.223010] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.622 [2024-04-27 02:42:53.223665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.622 [2024-04-27 02:42:53.224067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.622 [2024-04-27 02:42:53.224081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f1c00 with addr=10.0.0.2, port=4420 00:23:19.622 [2024-04-27 02:42:53.224090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f1c00 is same with the state(5) to be set 00:23:19.622 [2024-04-27 02:42:53.224108] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f1c00 (9): Bad file descriptor 00:23:19.622 [2024-04-27 02:42:53.224167] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.622 [2024-04-27 02:42:53.224178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.622 [2024-04-27 02:42:53.224191] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.622 [2024-04-27 02:42:53.224206] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
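The burst of `connect() failed, errno = 111` (ECONNREFUSED) and `Resetting controller failed` messages here is expected: host/discovery.sh@127 has just removed the 4420 listener from the target, so the host-side reconnect logic keeps failing against that port until the next discovery log page prunes the dead path. The target-side step that triggers it is the single RPC already shown in the trace; roughly, with the port value spelled out from the test defaults:

    # Target side: stop listening on the first port. Existing host
    # connections to 10.0.0.2:4420 now fail with ECONNREFUSED (111),
    # while the 4421 listener added earlier keeps serving the subsystem.
    NVMF_PORT=4420
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s "$NVMF_PORT"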
00:23:19.622 [2024-04-27 02:42:53.233064] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.622 [2024-04-27 02:42:53.233484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.622 [2024-04-27 02:42:53.234016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.622 [2024-04-27 02:42:53.234030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f1c00 with addr=10.0.0.2, port=4420 00:23:19.622 [2024-04-27 02:42:53.234039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f1c00 is same with the state(5) to be set 00:23:19.622 [2024-04-27 02:42:53.234057] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f1c00 (9): Bad file descriptor 00:23:19.622 [2024-04-27 02:42:53.234084] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.622 [2024-04-27 02:42:53.234092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.622 [2024-04-27 02:42:53.234100] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.622 [2024-04-27 02:42:53.234115] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.884 02:42:53 -- common/autotest_common.sh@904 -- # return 0 00:23:19.884 02:42:53 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.884 02:42:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.884 02:42:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:19.884 02:42:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:19.884 [2024-04-27 02:42:53.243120] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.884 [2024-04-27 02:42:53.243546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.884 [2024-04-27 02:42:53.244087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.884 [2024-04-27 02:42:53.244101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f1c00 with addr=10.0.0.2, port=4420 00:23:19.884 [2024-04-27 02:42:53.244110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f1c00 is same with the state(5) to be set 00:23:19.884 [2024-04-27 02:42:53.244128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f1c00 (9): Bad file descriptor 00:23:19.884 [2024-04-27 02:42:53.244156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.884 [2024-04-27 02:42:53.244165] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.884 [2024-04-27 02:42:53.244172] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.884 [2024-04-27 02:42:53.244187] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
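The `host/discovery.sh@55` and `@63` fragments throughout this trace are small RPC wrappers that turn JSON answers from the host application (reached via `-s /tmp/host.sock`) into single space-separated strings that are easy to compare. A rough sketch of the two helpers, assembled from the pipelines visible in the xtrace:

    # Names of all bdevs known to the host app, e.g. "nvme0n1 nvme0n2".
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    # Ports (trsvcid) of every path attached to a controller, e.g. "4420 4421".
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # host/discovery.sh@131 then just waits for the removed 4420 path to drop:
    # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'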
00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:19.884 02:42:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.884 02:42:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.884 02:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.884 02:42:53 -- host/discovery.sh@55 -- # sort 00:23:19.884 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:19.884 02:42:53 -- host/discovery.sh@55 -- # xargs 00:23:19.884 [2024-04-27 02:42:53.253176] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.884 [2024-04-27 02:42:53.253664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.884 [2024-04-27 02:42:53.254171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.884 [2024-04-27 02:42:53.254182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f1c00 with addr=10.0.0.2, port=4420 00:23:19.884 [2024-04-27 02:42:53.254190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f1c00 is same with the state(5) to be set 00:23:19.884 [2024-04-27 02:42:53.254202] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f1c00 (9): Bad file descriptor 00:23:19.884 [2024-04-27 02:42:53.254220] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.884 [2024-04-27 02:42:53.254226] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.884 [2024-04-27 02:42:53.254234] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.884 [2024-04-27 02:42:53.254244] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.884 [2024-04-27 02:42:53.263238] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.884 [2024-04-27 02:42:53.263696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.884 [2024-04-27 02:42:53.263983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.884 [2024-04-27 02:42:53.263993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f1c00 with addr=10.0.0.2, port=4420 00:23:19.884 [2024-04-27 02:42:53.264001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f1c00 is same with the state(5) to be set 00:23:19.884 [2024-04-27 02:42:53.264012] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f1c00 (9): Bad file descriptor 00:23:19.884 [2024-04-27 02:42:53.264022] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.884 [2024-04-27 02:42:53.264028] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.884 [2024-04-27 02:42:53.264035] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.884 [2024-04-27 02:42:53.264046] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
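The `notify_get_notifications -i <id>` calls count how many bdev add/remove notifications the host app has emitted since the last observed notify_id; `is_notification_count_eq` at host/discovery.sh@79-80 is just `waitforcondition` wrapped around that count. A hedged reconstruction of the counting helper at host/discovery.sh@74-75, consistent with the notify_id values 0, 1, 2, 2, 4 seen in this trace (notification_count and notify_id are globals the test carries between steps):

    # Count notifications newer than the last seen notify_id and advance it.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    # is_notification_count_eq 0 therefore reduces to:
    # expected_count=0
    # waitforcondition 'get_notification_count && ((notification_count == expected_count))'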
00:23:19.884 02:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.884 [2024-04-27 02:42:53.270274] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:19.884 [2024-04-27 02:42:53.270297] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:19.884 02:42:53 -- common/autotest_common.sh@904 -- # return 0 00:23:19.884 02:42:53 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:19.884 02:42:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:19.884 02:42:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:19.884 02:42:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:19.884 02:42:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:19.884 02:42:53 -- host/discovery.sh@63 -- # xargs 00:23:19.884 02:42:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:19.884 02:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.884 02:42:53 -- host/discovery.sh@63 -- # sort -n 00:23:19.884 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:19.884 02:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:23:19.884 02:42:53 -- common/autotest_common.sh@904 -- # return 0 00:23:19.884 02:42:53 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:19.884 02:42:53 -- host/discovery.sh@79 -- # expected_count=0 00:23:19.884 02:42:53 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:19.884 02:42:53 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:19.884 02:42:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:19.884 02:42:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:19.884 02:42:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:19.884 02:42:53 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:19.884 02:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.884 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:19.884 02:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.884 02:42:53 -- host/discovery.sh@74 -- # notification_count=0 00:23:19.884 02:42:53 -- host/discovery.sh@75 -- # notify_id=2 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:19.884 02:42:53 -- common/autotest_common.sh@904 -- # return 0 00:23:19.884 02:42:53 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:19.884 02:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.884 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:19.884 02:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.884 02:42:53 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:19.884 02:42:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:19.884 02:42:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:19.884 02:42:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:19.884 02:42:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:19.884 02:42:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:19.884 02:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.884 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:19.884 02:42:53 -- host/discovery.sh@59 -- # sort 00:23:19.884 02:42:53 -- host/discovery.sh@59 -- # xargs 00:23:19.884 02:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:19.884 02:42:53 -- common/autotest_common.sh@904 -- # return 0 00:23:19.884 02:42:53 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:19.884 02:42:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:19.884 02:42:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:19.884 02:42:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:19.884 02:42:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:19.884 02:42:53 -- host/discovery.sh@55 -- # sort 00:23:19.884 02:42:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.884 02:42:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.884 02:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.884 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:19.884 02:42:53 -- host/discovery.sh@55 -- # xargs 00:23:19.884 02:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.146 02:42:53 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:20.146 02:42:53 -- common/autotest_common.sh@904 -- # return 0 00:23:20.146 02:42:53 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:20.146 02:42:53 -- host/discovery.sh@79 -- # expected_count=2 00:23:20.146 02:42:53 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:20.146 02:42:53 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:20.146 02:42:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:20.146 02:42:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:20.146 02:42:53 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:20.146 02:42:53 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:20.146 02:42:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:20.147 02:42:53 -- host/discovery.sh@74 -- # jq '. | length' 00:23:20.147 02:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.147 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:20.147 02:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.147 02:42:53 -- host/discovery.sh@74 -- # notification_count=2 00:23:20.147 02:42:53 -- host/discovery.sh@75 -- # notify_id=4 00:23:20.147 02:42:53 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:20.147 02:42:53 -- common/autotest_common.sh@904 -- # return 0 00:23:20.147 02:42:53 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:20.147 02:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.147 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:21.142 [2024-04-27 02:42:54.592525] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:21.142 [2024-04-27 02:42:54.592542] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:21.142 [2024-04-27 02:42:54.592556] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:21.142 [2024-04-27 02:42:54.680849] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:21.410 [2024-04-27 02:42:54.953776] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:21.410 [2024-04-27 02:42:54.953806] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:21.410 02:42:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.410 02:42:54 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:21.410 02:42:54 -- common/autotest_common.sh@638 -- # local es=0 00:23:21.410 02:42:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:21.410 02:42:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:21.410 02:42:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:21.410 02:42:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:21.410 02:42:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:21.410 02:42:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:21.410 02:42:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.410 02:42:54 -- 
common/autotest_common.sh@10 -- # set +x 00:23:21.410 request: 00:23:21.410 { 00:23:21.410 "name": "nvme", 00:23:21.410 "trtype": "tcp", 00:23:21.410 "traddr": "10.0.0.2", 00:23:21.410 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:21.410 "adrfam": "ipv4", 00:23:21.410 "trsvcid": "8009", 00:23:21.410 "wait_for_attach": true, 00:23:21.410 "method": "bdev_nvme_start_discovery", 00:23:21.410 "req_id": 1 00:23:21.410 } 00:23:21.410 Got JSON-RPC error response 00:23:21.410 response: 00:23:21.410 { 00:23:21.410 "code": -17, 00:23:21.410 "message": "File exists" 00:23:21.410 } 00:23:21.410 02:42:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:21.410 02:42:54 -- common/autotest_common.sh@641 -- # es=1 00:23:21.410 02:42:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:21.410 02:42:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:21.410 02:42:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:21.410 02:42:54 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:21.410 02:42:54 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:21.410 02:42:54 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:21.410 02:42:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.410 02:42:54 -- host/discovery.sh@67 -- # sort 00:23:21.410 02:42:54 -- common/autotest_common.sh@10 -- # set +x 00:23:21.410 02:42:54 -- host/discovery.sh@67 -- # xargs 00:23:21.410 02:42:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.410 02:42:55 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:21.410 02:42:55 -- host/discovery.sh@146 -- # get_bdev_list 00:23:21.410 02:42:55 -- host/discovery.sh@55 -- # sort 00:23:21.410 02:42:55 -- host/discovery.sh@55 -- # xargs 00:23:21.410 02:42:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.410 02:42:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:21.410 02:42:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.410 02:42:55 -- common/autotest_common.sh@10 -- # set +x 00:23:21.670 02:42:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.671 02:42:55 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:21.671 02:42:55 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:21.671 02:42:55 -- common/autotest_common.sh@638 -- # local es=0 00:23:21.671 02:42:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:21.671 02:42:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:21.671 02:42:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:21.671 02:42:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:21.671 02:42:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:21.671 02:42:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:21.671 02:42:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.671 02:42:55 -- common/autotest_common.sh@10 -- # set +x 00:23:21.671 request: 00:23:21.671 { 00:23:21.671 "name": "nvme_second", 00:23:21.671 "trtype": "tcp", 00:23:21.671 "traddr": "10.0.0.2", 00:23:21.671 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:23:21.671 "adrfam": "ipv4", 00:23:21.671 "trsvcid": "8009", 00:23:21.671 "wait_for_attach": true, 00:23:21.671 "method": "bdev_nvme_start_discovery", 00:23:21.671 "req_id": 1 00:23:21.671 } 00:23:21.671 Got JSON-RPC error response 00:23:21.671 response: 00:23:21.671 { 00:23:21.671 "code": -17, 00:23:21.671 "message": "File exists" 00:23:21.671 } 00:23:21.671 02:42:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:21.671 02:42:55 -- common/autotest_common.sh@641 -- # es=1 00:23:21.671 02:42:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:21.671 02:42:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:21.671 02:42:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:21.671 02:42:55 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:21.671 02:42:55 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:21.671 02:42:55 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:21.671 02:42:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.671 02:42:55 -- host/discovery.sh@67 -- # sort 00:23:21.671 02:42:55 -- common/autotest_common.sh@10 -- # set +x 00:23:21.671 02:42:55 -- host/discovery.sh@67 -- # xargs 00:23:21.671 02:42:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.671 02:42:55 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:21.671 02:42:55 -- host/discovery.sh@152 -- # get_bdev_list 00:23:21.671 02:42:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.671 02:42:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:21.671 02:42:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.671 02:42:55 -- host/discovery.sh@55 -- # sort 00:23:21.671 02:42:55 -- common/autotest_common.sh@10 -- # set +x 00:23:21.671 02:42:55 -- host/discovery.sh@55 -- # xargs 00:23:21.671 02:42:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.671 02:42:55 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:21.671 02:42:55 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:21.671 02:42:55 -- common/autotest_common.sh@638 -- # local es=0 00:23:21.671 02:42:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:21.671 02:42:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:21.671 02:42:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:21.671 02:42:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:21.671 02:42:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:21.671 02:42:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:21.671 02:42:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.671 02:42:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.611 [2024-04-27 02:42:56.209391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.611 [2024-04-27 02:42:56.209874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.611 [2024-04-27 02:42:56.209886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x15f3540 with addr=10.0.0.2, port=8010 00:23:22.611 [2024-04-27 02:42:56.209898] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:22.611 [2024-04-27 02:42:56.209906] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:22.611 [2024-04-27 02:42:56.209913] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:23.993 [2024-04-27 02:42:57.211786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.993 [2024-04-27 02:42:57.212282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.993 [2024-04-27 02:42:57.212294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f3540 with addr=10.0.0.2, port=8010 00:23:23.993 [2024-04-27 02:42:57.212305] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:23.993 [2024-04-27 02:42:57.212312] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:23.993 [2024-04-27 02:42:57.212318] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:24.937 [2024-04-27 02:42:58.213602] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:24.937 request: 00:23:24.937 { 00:23:24.937 "name": "nvme_second", 00:23:24.937 "trtype": "tcp", 00:23:24.937 "traddr": "10.0.0.2", 00:23:24.937 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:24.937 "adrfam": "ipv4", 00:23:24.937 "trsvcid": "8010", 00:23:24.937 "attach_timeout_ms": 3000, 00:23:24.937 "method": "bdev_nvme_start_discovery", 00:23:24.937 "req_id": 1 00:23:24.937 } 00:23:24.937 Got JSON-RPC error response 00:23:24.937 response: 00:23:24.937 { 00:23:24.937 "code": -110, 00:23:24.937 "message": "Connection timed out" 00:23:24.937 } 00:23:24.937 02:42:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:24.937 02:42:58 -- common/autotest_common.sh@641 -- # es=1 00:23:24.937 02:42:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:24.937 02:42:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:24.937 02:42:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:24.937 02:42:58 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:24.937 02:42:58 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:24.937 02:42:58 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:24.937 02:42:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.937 02:42:58 -- host/discovery.sh@67 -- # sort 00:23:24.937 02:42:58 -- common/autotest_common.sh@10 -- # set +x 00:23:24.937 02:42:58 -- host/discovery.sh@67 -- # xargs 00:23:24.937 02:42:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.937 02:42:58 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:24.937 02:42:58 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:24.937 02:42:58 -- host/discovery.sh@161 -- # kill 226230 00:23:24.937 02:42:58 -- host/discovery.sh@162 -- # nvmftestfini 00:23:24.937 02:42:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:24.937 02:42:58 -- nvmf/common.sh@117 -- # sync 00:23:24.937 02:42:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:24.937 02:42:58 -- nvmf/common.sh@120 -- # set +e 00:23:24.937 02:42:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:24.937 02:42:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:24.937 rmmod nvme_tcp 00:23:24.937 rmmod nvme_fabrics 
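The three discovery RPCs above are negative tests: starting a discovery service under a bdev name that is already in use fails with -17 ("File exists"), and pointing one at port 8010, where nothing is listening, exhausts the 3000 ms attach timeout and fails with -110 ("Connection timed out"). The `NOT` wrapper from common/autotest_common.sh inverts the exit status so an expected failure keeps `set -e` happy; a simplified sketch of the pattern (the in-tree helper is more involved and also inspects the exit code, which is what the `es` checks in the trace are doing):

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }

    # Expected failures exercised above:
    # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
    #     -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w        # -17
    # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    #     -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000   # -110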
00:23:24.937 rmmod nvme_keyring 00:23:24.937 02:42:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:24.937 02:42:58 -- nvmf/common.sh@124 -- # set -e 00:23:24.937 02:42:58 -- nvmf/common.sh@125 -- # return 0 00:23:24.937 02:42:58 -- nvmf/common.sh@478 -- # '[' -n 226041 ']' 00:23:24.937 02:42:58 -- nvmf/common.sh@479 -- # killprocess 226041 00:23:24.937 02:42:58 -- common/autotest_common.sh@936 -- # '[' -z 226041 ']' 00:23:24.937 02:42:58 -- common/autotest_common.sh@940 -- # kill -0 226041 00:23:24.937 02:42:58 -- common/autotest_common.sh@941 -- # uname 00:23:24.937 02:42:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:24.937 02:42:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 226041 00:23:24.937 02:42:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:24.937 02:42:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:24.937 02:42:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 226041' 00:23:24.937 killing process with pid 226041 00:23:24.937 02:42:58 -- common/autotest_common.sh@955 -- # kill 226041 00:23:24.937 02:42:58 -- common/autotest_common.sh@960 -- # wait 226041 00:23:24.937 02:42:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:24.937 02:42:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:24.937 02:42:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:24.937 02:42:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.937 02:42:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:24.937 02:42:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.937 02:42:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.937 02:42:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.484 02:43:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:27.484 00:23:27.484 real 0m19.624s 00:23:27.484 user 0m23.051s 00:23:27.484 sys 0m6.641s 00:23:27.484 02:43:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:27.484 02:43:00 -- common/autotest_common.sh@10 -- # set +x 00:23:27.484 ************************************ 00:23:27.484 END TEST nvmf_discovery 00:23:27.484 ************************************ 00:23:27.484 02:43:00 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:27.484 02:43:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:27.484 02:43:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:27.484 02:43:00 -- common/autotest_common.sh@10 -- # set +x 00:23:27.484 ************************************ 00:23:27.484 START TEST nvmf_discovery_remove_ifc 00:23:27.484 ************************************ 00:23:27.484 02:43:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:27.484 * Looking for test storage... 
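Each `run_test NAME script args` call seen here (nvmf/nvmf.sh@100) wraps one test script: it prints the START banner, times the script, and prints the END banner only if the script exits cleanly, which is why the `real/user/sys` trio appears right before "END TEST nvmf_discovery". A simplified sketch of that wrapper, under the assumption that the actual helper in autotest_common.sh also records the result for the per-run summary:

    # Run one named test script with banners and timing around it.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }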
00:23:27.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:27.484 02:43:00 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.484 02:43:00 -- nvmf/common.sh@7 -- # uname -s 00:23:27.484 02:43:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.484 02:43:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.484 02:43:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.484 02:43:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.484 02:43:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.484 02:43:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.484 02:43:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.484 02:43:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.484 02:43:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.484 02:43:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.484 02:43:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:27.484 02:43:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:27.484 02:43:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.484 02:43:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.484 02:43:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.484 02:43:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.484 02:43:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.484 02:43:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.484 02:43:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.484 02:43:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.484 02:43:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.484 02:43:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.484 02:43:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.484 02:43:00 -- paths/export.sh@5 -- # export PATH 00:23:27.484 02:43:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.484 02:43:00 -- nvmf/common.sh@47 -- # : 0 00:23:27.484 02:43:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:27.484 02:43:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:27.484 02:43:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.484 02:43:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.484 02:43:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.484 02:43:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:27.484 02:43:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:27.484 02:43:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:27.484 02:43:00 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:27.484 02:43:00 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:27.484 02:43:00 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:27.484 02:43:00 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:27.484 02:43:00 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:27.484 02:43:00 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:27.484 02:43:00 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:27.484 02:43:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:27.484 02:43:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.484 02:43:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:27.484 02:43:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:27.484 02:43:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:27.484 02:43:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.484 02:43:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.484 02:43:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.484 02:43:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:27.484 02:43:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:27.484 02:43:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:27.484 02:43:00 -- common/autotest_common.sh@10 -- # set +x 00:23:35.627 02:43:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:35.627 02:43:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:35.627 02:43:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:35.627 02:43:07 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:35.627 02:43:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:35.627 02:43:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:35.627 02:43:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:35.627 02:43:07 -- nvmf/common.sh@295 -- # net_devs=() 00:23:35.627 02:43:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:35.627 02:43:07 -- nvmf/common.sh@296 -- # e810=() 00:23:35.627 02:43:07 -- nvmf/common.sh@296 -- # local -ga e810 00:23:35.627 02:43:07 -- nvmf/common.sh@297 -- # x722=() 00:23:35.627 02:43:07 -- nvmf/common.sh@297 -- # local -ga x722 00:23:35.627 02:43:07 -- nvmf/common.sh@298 -- # mlx=() 00:23:35.627 02:43:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:35.627 02:43:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.627 02:43:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.627 02:43:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.627 02:43:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.627 02:43:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.627 02:43:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.627 02:43:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.627 02:43:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.627 02:43:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.627 02:43:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.627 02:43:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.627 02:43:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:35.627 02:43:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:35.627 02:43:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:35.627 02:43:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.627 02:43:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:35.627 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:35.627 02:43:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.627 02:43:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:35.627 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:35.627 02:43:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:35.627 02:43:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:35.627 02:43:07 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.627 02:43:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.627 02:43:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:35.627 02:43:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.627 02:43:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:35.627 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:35.627 02:43:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.627 02:43:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.627 02:43:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.627 02:43:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:35.627 02:43:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.627 02:43:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:35.627 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:35.627 02:43:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.627 02:43:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:35.627 02:43:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:35.627 02:43:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:35.627 02:43:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:35.627 02:43:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.627 02:43:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.627 02:43:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.627 02:43:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:35.627 02:43:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.627 02:43:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.627 02:43:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:35.627 02:43:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.627 02:43:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.627 02:43:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:35.627 02:43:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:35.627 02:43:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.627 02:43:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.627 02:43:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.627 02:43:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.627 02:43:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:35.627 02:43:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.627 02:43:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.627 02:43:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.627 02:43:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:35.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:35.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:23:35.627 00:23:35.627 --- 10.0.0.2 ping statistics --- 00:23:35.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.627 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:23:35.627 02:43:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:35.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:23:35.627 00:23:35.627 --- 10.0.0.1 ping statistics --- 00:23:35.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.627 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:23:35.627 02:43:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.627 02:43:08 -- nvmf/common.sh@411 -- # return 0 00:23:35.627 02:43:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:35.627 02:43:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.627 02:43:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:35.627 02:43:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:35.627 02:43:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.627 02:43:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:35.627 02:43:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:35.627 02:43:08 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:35.627 02:43:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:35.627 02:43:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:35.627 02:43:08 -- common/autotest_common.sh@10 -- # set +x 00:23:35.627 02:43:08 -- nvmf/common.sh@470 -- # nvmfpid=232404 00:23:35.627 02:43:08 -- nvmf/common.sh@471 -- # waitforlisten 232404 00:23:35.627 02:43:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:35.627 02:43:08 -- common/autotest_common.sh@817 -- # '[' -z 232404 ']' 00:23:35.627 02:43:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.627 02:43:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:35.627 02:43:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.627 02:43:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:35.627 02:43:08 -- common/autotest_common.sh@10 -- # set +x 00:23:35.627 [2024-04-27 02:43:08.256159] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:23:35.628 [2024-04-27 02:43:08.256241] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.628 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.628 [2024-04-27 02:43:08.326668] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.628 [2024-04-27 02:43:08.388708] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.628 [2024-04-27 02:43:08.388751] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
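The nvmftestinit sequence traced above is what lets a single machine play both NVMe/TCP target and initiator: one E810 port (cvl_0_0, the NVMF_TARGET_INTERFACE) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as 10.0.0.1 (NVMF_INITIATOR_IP), and every target-side command is then wrapped in ip netns exec. Condensed into a standalone sketch, with the interface names, addresses and binary path being simply the ones this run used:

# target-side port lives in its own namespace; initiator-side port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # NVMF_INITIATOR_IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # NVMF_FIRST_TARGET_IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # firewall opening for NVMe/TCP port 4420, copied from the trace
ping -c 1 10.0.0.2                                                  # connectivity check, as in the ping output above
# the target application itself is launched inside the namespace (path shortened relative to the spdk checkout):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2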
00:23:35.628 [2024-04-27 02:43:08.388759] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.628 [2024-04-27 02:43:08.388765] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.628 [2024-04-27 02:43:08.388771] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.628 [2024-04-27 02:43:08.388789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.628 02:43:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:35.628 02:43:09 -- common/autotest_common.sh@850 -- # return 0 00:23:35.628 02:43:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:35.628 02:43:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:35.628 02:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:35.628 02:43:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.628 02:43:09 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:35.628 02:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.628 02:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:35.628 [2024-04-27 02:43:09.059400] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.628 [2024-04-27 02:43:09.067522] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:35.628 null0 00:23:35.628 [2024-04-27 02:43:09.099536] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.628 02:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.628 02:43:09 -- host/discovery_remove_ifc.sh@59 -- # hostpid=232467 00:23:35.628 02:43:09 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 232467 /tmp/host.sock 00:23:35.628 02:43:09 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:35.628 02:43:09 -- common/autotest_common.sh@817 -- # '[' -z 232467 ']' 00:23:35.628 02:43:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:35.628 02:43:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:35.628 02:43:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:35.628 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:35.628 02:43:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:35.628 02:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:35.628 [2024-04-27 02:43:09.168632] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:23:35.628 [2024-04-27 02:43:09.168680] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232467 ] 00:23:35.628 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.628 [2024-04-27 02:43:09.226387] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.889 [2024-04-27 02:43:09.289949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.460 02:43:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:36.460 02:43:09 -- common/autotest_common.sh@850 -- # return 0 00:23:36.460 02:43:09 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:36.460 02:43:09 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:36.460 02:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.460 02:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:36.460 02:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.460 02:43:09 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:36.460 02:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.460 02:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:36.460 02:43:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.460 02:43:10 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:36.460 02:43:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.460 02:43:10 -- common/autotest_common.sh@10 -- # set +x 00:23:37.847 [2024-04-27 02:43:11.079545] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:37.847 [2024-04-27 02:43:11.079569] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:37.847 [2024-04-27 02:43:11.079584] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:37.847 [2024-04-27 02:43:11.165850] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:37.847 [2024-04-27 02:43:11.394186] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:37.847 [2024-04-27 02:43:11.394234] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:37.847 [2024-04-27 02:43:11.394255] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:37.847 [2024-04-27 02:43:11.394269] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:37.847 [2024-04-27 02:43:11.394296] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:37.847 02:43:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.847 02:43:11 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:37.847 02:43:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.847 [2024-04-27 02:43:11.398482] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc0b400 was 
disconnected and freed. delete nvme_qpair. 00:23:37.847 02:43:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.847 02:43:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.847 02:43:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.847 02:43:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.847 02:43:11 -- common/autotest_common.sh@10 -- # set +x 00:23:37.847 02:43:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.847 02:43:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.847 02:43:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:37.847 02:43:11 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:37.847 02:43:11 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:38.109 02:43:11 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:38.109 02:43:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.109 02:43:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.109 02:43:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.109 02:43:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.109 02:43:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.109 02:43:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.109 02:43:11 -- common/autotest_common.sh@10 -- # set +x 00:23:38.109 02:43:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.109 02:43:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:38.109 02:43:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:39.068 02:43:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.068 02:43:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.068 02:43:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.068 02:43:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.068 02:43:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:39.068 02:43:12 -- common/autotest_common.sh@10 -- # set +x 00:23:39.068 02:43:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:39.068 02:43:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.068 02:43:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:39.068 02:43:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:40.454 02:43:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:40.454 02:43:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:40.454 02:43:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.454 02:43:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:40.454 02:43:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.454 02:43:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:40.454 02:43:13 -- common/autotest_common.sh@10 -- # set +x 00:23:40.454 02:43:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.454 02:43:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:40.454 02:43:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:41.398 02:43:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:41.398 02:43:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:41.398 02:43:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.398 02:43:14 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:41.398 02:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.398 02:43:14 -- common/autotest_common.sh@10 -- # set +x 00:23:41.398 02:43:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:41.398 02:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.398 02:43:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:41.398 02:43:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:42.341 02:43:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:42.341 02:43:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:42.341 02:43:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.341 02:43:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:42.341 02:43:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.341 02:43:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:42.341 02:43:15 -- common/autotest_common.sh@10 -- # set +x 00:23:42.341 02:43:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.341 02:43:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:42.341 02:43:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:43.284 [2024-04-27 02:43:16.834666] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:43.284 [2024-04-27 02:43:16.834711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.284 [2024-04-27 02:43:16.834723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.284 [2024-04-27 02:43:16.834733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.284 [2024-04-27 02:43:16.834741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.284 [2024-04-27 02:43:16.834749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.284 [2024-04-27 02:43:16.834756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.284 [2024-04-27 02:43:16.834764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.284 [2024-04-27 02:43:16.834771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.284 [2024-04-27 02:43:16.834779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.284 [2024-04-27 02:43:16.834787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.284 [2024-04-27 02:43:16.834794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd18e0 is same with the state(5) to be set 00:23:43.284 [2024-04-27 02:43:16.844686] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd18e0 (9): Bad file descriptor 00:23:43.284 [2024-04-27 
02:43:16.854726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:43.284 02:43:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:43.284 02:43:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:43.284 02:43:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.284 02:43:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:43.284 02:43:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.284 02:43:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:43.284 02:43:16 -- common/autotest_common.sh@10 -- # set +x 00:23:44.671 [2024-04-27 02:43:17.892378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:45.614 [2024-04-27 02:43:18.916319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:45.614 [2024-04-27 02:43:18.916368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd18e0 with addr=10.0.0.2, port=4420 00:23:45.614 [2024-04-27 02:43:18.916382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd18e0 is same with the state(5) to be set 00:23:45.614 [2024-04-27 02:43:18.916774] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd18e0 (9): Bad file descriptor 00:23:45.614 [2024-04-27 02:43:18.916798] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.614 [2024-04-27 02:43:18.916819] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:45.614 [2024-04-27 02:43:18.916842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.614 [2024-04-27 02:43:18.916853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.614 [2024-04-27 02:43:18.916863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.614 [2024-04-27 02:43:18.916874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.614 [2024-04-27 02:43:18.916882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.614 [2024-04-27 02:43:18.916889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.614 [2024-04-27 02:43:18.916897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.614 [2024-04-27 02:43:18.916904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.614 [2024-04-27 02:43:18.916912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.614 [2024-04-27 02:43:18.916919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.614 [2024-04-27 02:43:18.916927] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
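The connect() failures with errno 110 logged above are the point of this test rather than a fault: just before them the target address was deleted and the interface taken down inside the namespace, so the reconnect attempts of the discovery-attached controller (started with --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1) time out and the attached bdev is torn down. A rough sketch of that removal-and-wait step, using scripts/rpc.py in place of the rpc_cmd wrapper seen in the trace:

# take the target address away so the established NVMe/TCP connection starts failing
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
# poll the host application until bdev_get_bdevs reports an empty list, i.e. nvme0n1 is gone
while [ -n "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" ]; do
  sleep 1
done

Once the address is added back and cvl_0_0 is brought up again, as traced a little further on, the discovery service reconnects and re-attaches the namespace as nvme1n1.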
00:23:45.614 [2024-04-27 02:43:18.917408] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd1cf0 (9): Bad file descriptor 00:23:45.614 [2024-04-27 02:43:18.918419] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:45.614 [2024-04-27 02:43:18.918430] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:45.614 02:43:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.614 02:43:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:45.614 02:43:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:46.557 02:43:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:46.557 02:43:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.557 02:43:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:46.557 02:43:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.557 02:43:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:46.557 02:43:19 -- common/autotest_common.sh@10 -- # set +x 00:23:46.557 02:43:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:46.557 02:43:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.557 02:43:19 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:46.557 02:43:19 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.557 02:43:20 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.557 02:43:20 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:46.557 02:43:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:46.557 02:43:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.557 02:43:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:46.557 02:43:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.557 02:43:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:46.557 02:43:20 -- common/autotest_common.sh@10 -- # set +x 00:23:46.557 02:43:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:46.557 02:43:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.558 02:43:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:46.558 02:43:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:47.500 [2024-04-27 02:43:20.974611] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:47.500 [2024-04-27 02:43:20.974633] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:47.500 [2024-04-27 02:43:20.974648] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:47.500 [2024-04-27 02:43:21.101054] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:47.760 02:43:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:47.760 02:43:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.760 02:43:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:47.760 02:43:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:47.760 02:43:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:47.760 02:43:21 -- common/autotest_common.sh@10 -- # set +x 00:23:47.761 
02:43:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:47.761 [2024-04-27 02:43:21.162989] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:47.761 [2024-04-27 02:43:21.163030] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:47.761 [2024-04-27 02:43:21.163051] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:47.761 [2024-04-27 02:43:21.163065] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:47.761 [2024-04-27 02:43:21.163073] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:47.761 [2024-04-27 02:43:21.173130] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc15da0 was disconnected and freed. delete nvme_qpair. 00:23:47.761 02:43:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:47.761 02:43:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:47.761 02:43:21 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:47.761 02:43:21 -- host/discovery_remove_ifc.sh@90 -- # killprocess 232467 00:23:47.761 02:43:21 -- common/autotest_common.sh@936 -- # '[' -z 232467 ']' 00:23:47.761 02:43:21 -- common/autotest_common.sh@940 -- # kill -0 232467 00:23:47.761 02:43:21 -- common/autotest_common.sh@941 -- # uname 00:23:47.761 02:43:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:47.761 02:43:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 232467 00:23:47.761 02:43:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:47.761 02:43:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:47.761 02:43:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 232467' 00:23:47.761 killing process with pid 232467 00:23:47.761 02:43:21 -- common/autotest_common.sh@955 -- # kill 232467 00:23:47.761 02:43:21 -- common/autotest_common.sh@960 -- # wait 232467 00:23:48.021 02:43:21 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:48.021 02:43:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:48.021 02:43:21 -- nvmf/common.sh@117 -- # sync 00:23:48.021 02:43:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.021 02:43:21 -- nvmf/common.sh@120 -- # set +e 00:23:48.021 02:43:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.021 02:43:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.021 rmmod nvme_tcp 00:23:48.021 rmmod nvme_fabrics 00:23:48.021 rmmod nvme_keyring 00:23:48.021 02:43:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.021 02:43:21 -- nvmf/common.sh@124 -- # set -e 00:23:48.021 02:43:21 -- nvmf/common.sh@125 -- # return 0 00:23:48.021 02:43:21 -- nvmf/common.sh@478 -- # '[' -n 232404 ']' 00:23:48.021 02:43:21 -- nvmf/common.sh@479 -- # killprocess 232404 00:23:48.021 02:43:21 -- common/autotest_common.sh@936 -- # '[' -z 232404 ']' 00:23:48.021 02:43:21 -- common/autotest_common.sh@940 -- # kill -0 232404 00:23:48.021 02:43:21 -- common/autotest_common.sh@941 -- # uname 00:23:48.021 02:43:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:48.021 02:43:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 232404 00:23:48.021 02:43:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:48.021 02:43:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:48.021 02:43:21 -- common/autotest_common.sh@954 
-- # echo 'killing process with pid 232404' 00:23:48.021 killing process with pid 232404 00:23:48.021 02:43:21 -- common/autotest_common.sh@955 -- # kill 232404 00:23:48.021 02:43:21 -- common/autotest_common.sh@960 -- # wait 232404 00:23:48.281 02:43:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:48.281 02:43:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:48.281 02:43:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:48.281 02:43:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.281 02:43:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.281 02:43:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.281 02:43:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.281 02:43:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.221 02:43:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.221 00:23:50.221 real 0m22.941s 00:23:50.221 user 0m26.238s 00:23:50.221 sys 0m6.483s 00:23:50.221 02:43:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:50.221 02:43:23 -- common/autotest_common.sh@10 -- # set +x 00:23:50.221 ************************************ 00:23:50.221 END TEST nvmf_discovery_remove_ifc 00:23:50.221 ************************************ 00:23:50.221 02:43:23 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:50.221 02:43:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:50.221 02:43:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:50.221 02:43:23 -- common/autotest_common.sh@10 -- # set +x 00:23:50.483 ************************************ 00:23:50.483 START TEST nvmf_identify_kernel_target 00:23:50.483 ************************************ 00:23:50.483 02:43:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:50.483 * Looking for test storage... 
00:23:50.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.483 02:43:24 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.483 02:43:24 -- nvmf/common.sh@7 -- # uname -s 00:23:50.483 02:43:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.483 02:43:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.483 02:43:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.483 02:43:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.483 02:43:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.483 02:43:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.483 02:43:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.483 02:43:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.483 02:43:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.483 02:43:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.483 02:43:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.483 02:43:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.483 02:43:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.483 02:43:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.483 02:43:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.483 02:43:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.483 02:43:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.483 02:43:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.483 02:43:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.483 02:43:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.483 02:43:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.483 02:43:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.483 02:43:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.483 02:43:24 -- paths/export.sh@5 -- # export PATH 00:23:50.483 02:43:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.483 02:43:24 -- nvmf/common.sh@47 -- # : 0 00:23:50.483 02:43:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.483 02:43:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.483 02:43:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.483 02:43:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.483 02:43:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.483 02:43:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.483 02:43:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.483 02:43:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.483 02:43:24 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:50.483 02:43:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:50.483 02:43:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.483 02:43:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:50.483 02:43:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:50.483 02:43:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:50.483 02:43:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.483 02:43:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.483 02:43:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.483 02:43:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:50.483 02:43:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:50.483 02:43:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.483 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:23:58.638 02:43:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:58.638 02:43:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:58.638 02:43:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:58.638 02:43:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:58.638 02:43:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:58.638 02:43:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:58.638 02:43:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:58.638 02:43:30 -- nvmf/common.sh@295 -- # net_devs=() 00:23:58.638 02:43:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:58.638 02:43:30 -- nvmf/common.sh@296 -- # e810=() 00:23:58.638 02:43:30 -- nvmf/common.sh@296 -- # local -ga e810 00:23:58.638 02:43:30 -- nvmf/common.sh@297 -- # 
x722=() 00:23:58.638 02:43:30 -- nvmf/common.sh@297 -- # local -ga x722 00:23:58.638 02:43:30 -- nvmf/common.sh@298 -- # mlx=() 00:23:58.638 02:43:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:58.638 02:43:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.638 02:43:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.638 02:43:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.638 02:43:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.638 02:43:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.638 02:43:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.638 02:43:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.638 02:43:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.638 02:43:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.638 02:43:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.638 02:43:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.638 02:43:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:58.638 02:43:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:58.638 02:43:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:58.638 02:43:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.638 02:43:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:58.638 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:58.638 02:43:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.638 02:43:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:58.638 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:58.638 02:43:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:58.638 02:43:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:58.638 02:43:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.638 02:43:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.638 02:43:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:58.638 02:43:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.638 02:43:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:58.638 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:58.638 02:43:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
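The gather_supported_nvmf_pci_devs walk traced here is essentially a sysfs lookup: the supported Intel E810/X722 and Mellanox device IDs are matched against the PCI bus, and for each matched function the kernel's net device name is read from /sys/bus/pci/devices/<bdf>/net/. A minimal sketch of that lookup for the first port found above:

pci=0000:4b:00.0                                   # one of the two 0x8086:0x159b functions listed above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # globs to .../net/cvl_0_0 on this machine
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keeping only cvl_0_0
net_devs+=("${pci_net_devs[@]}")                   # collected for the namespace setup that follows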
00:23:58.638 02:43:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.638 02:43:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.638 02:43:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:58.638 02:43:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.638 02:43:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:58.638 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:58.638 02:43:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.638 02:43:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:58.638 02:43:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:58.638 02:43:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:58.639 02:43:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:58.639 02:43:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:58.639 02:43:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.639 02:43:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.639 02:43:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.639 02:43:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:58.639 02:43:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.639 02:43:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.639 02:43:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:58.639 02:43:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.639 02:43:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.639 02:43:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:58.639 02:43:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:58.639 02:43:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.639 02:43:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.639 02:43:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.639 02:43:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.639 02:43:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:58.639 02:43:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.639 02:43:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.639 02:43:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:58.639 02:43:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:58.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.747 ms 00:23:58.639 00:23:58.639 --- 10.0.0.2 ping statistics --- 00:23:58.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.639 rtt min/avg/max/mdev = 0.747/0.747/0.747/0.000 ms 00:23:58.639 02:43:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:58.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.436 ms 00:23:58.639 00:23:58.639 --- 10.0.0.1 ping statistics --- 00:23:58.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.639 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:23:58.639 02:43:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.639 02:43:31 -- nvmf/common.sh@411 -- # return 0 00:23:58.639 02:43:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:58.639 02:43:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.639 02:43:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:58.639 02:43:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:58.639 02:43:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.639 02:43:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:58.639 02:43:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:58.639 02:43:31 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:58.639 02:43:31 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:58.639 02:43:31 -- nvmf/common.sh@717 -- # local ip 00:23:58.639 02:43:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:58.639 02:43:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:58.639 02:43:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.639 02:43:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.639 02:43:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:58.639 02:43:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.639 02:43:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:58.639 02:43:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:58.639 02:43:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:58.639 02:43:31 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:58.639 02:43:31 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:58.639 02:43:31 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:58.639 02:43:31 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:58.639 02:43:31 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:58.639 02:43:31 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:58.639 02:43:31 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:58.639 02:43:31 -- nvmf/common.sh@628 -- # local block nvme 00:23:58.639 02:43:31 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:58.639 02:43:31 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:58.639 02:43:31 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:58.639 02:43:31 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:01.188 Waiting for block devices as requested 00:24:01.188 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:01.188 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:01.449 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:01.449 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:01.449 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:01.710 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:01.710 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:01.710 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:01.710 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:24:01.972 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:01.972 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:01.972 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:02.234 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:02.234 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:02.234 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:02.495 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:02.495 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:02.495 02:43:35 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:02.495 02:43:35 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:02.495 02:43:35 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:02.495 02:43:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:02.495 02:43:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:02.495 02:43:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:02.495 02:43:35 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:02.495 02:43:35 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:02.495 02:43:35 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:02.495 No valid GPT data, bailing 00:24:02.495 02:43:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:02.496 02:43:36 -- scripts/common.sh@391 -- # pt= 00:24:02.496 02:43:36 -- scripts/common.sh@392 -- # return 1 00:24:02.496 02:43:36 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:02.496 02:43:36 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:24:02.496 02:43:36 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:02.496 02:43:36 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:02.496 02:43:36 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:02.496 02:43:36 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:02.496 02:43:36 -- nvmf/common.sh@656 -- # echo 1 00:24:02.496 02:43:36 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:24:02.496 02:43:36 -- nvmf/common.sh@658 -- # echo 1 00:24:02.496 02:43:36 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:02.496 02:43:36 -- nvmf/common.sh@661 -- # echo tcp 00:24:02.496 02:43:36 -- nvmf/common.sh@662 -- # echo 4420 00:24:02.496 02:43:36 -- nvmf/common.sh@663 -- # echo ipv4 00:24:02.496 02:43:36 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:02.496 02:43:36 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:24:02.496 00:24:02.496 Discovery Log Number of Records 2, Generation counter 2 00:24:02.496 =====Discovery Log Entry 0====== 00:24:02.496 trtype: tcp 00:24:02.496 adrfam: ipv4 00:24:02.496 subtype: current discovery subsystem 00:24:02.496 treq: not specified, sq flow control disable supported 00:24:02.496 portid: 1 00:24:02.496 trsvcid: 4420 00:24:02.496 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:02.496 traddr: 10.0.0.1 00:24:02.496 eflags: none 00:24:02.496 sectype: none 00:24:02.496 =====Discovery Log Entry 1====== 00:24:02.496 trtype: tcp 00:24:02.496 adrfam: ipv4 00:24:02.496 subtype: nvme subsystem 00:24:02.496 treq: not specified, sq flow control disable supported 00:24:02.496 portid: 1 00:24:02.496 trsvcid: 4420 00:24:02.496 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:02.496 traddr: 10.0.0.1 00:24:02.496 eflags: none 00:24:02.496 sectype: none 00:24:02.496 02:43:36 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:02.496 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:02.762 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.762 ===================================================== 00:24:02.762 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:02.762 ===================================================== 00:24:02.762 Controller Capabilities/Features 00:24:02.762 ================================ 00:24:02.762 Vendor ID: 0000 00:24:02.762 Subsystem Vendor ID: 0000 00:24:02.762 Serial Number: 8b035f6fe0c5d144ca0c 00:24:02.762 Model Number: Linux 00:24:02.762 Firmware Version: 6.7.0-68 00:24:02.762 Recommended Arb Burst: 0 00:24:02.762 IEEE OUI Identifier: 00 00 00 00:24:02.762 Multi-path I/O 00:24:02.762 May have multiple subsystem ports: No 00:24:02.762 May have multiple controllers: No 00:24:02.762 Associated with SR-IOV VF: No 00:24:02.762 Max Data Transfer Size: Unlimited 00:24:02.762 Max Number of Namespaces: 0 00:24:02.762 Max Number of I/O Queues: 1024 00:24:02.762 NVMe Specification Version (VS): 1.3 00:24:02.762 NVMe Specification Version (Identify): 1.3 00:24:02.762 Maximum Queue Entries: 1024 00:24:02.762 Contiguous Queues Required: No 00:24:02.762 Arbitration Mechanisms Supported 00:24:02.762 Weighted Round Robin: Not Supported 00:24:02.762 Vendor Specific: Not Supported 00:24:02.762 Reset Timeout: 7500 ms 00:24:02.762 Doorbell Stride: 4 bytes 00:24:02.762 NVM Subsystem Reset: Not Supported 00:24:02.762 Command Sets Supported 00:24:02.762 NVM Command Set: Supported 00:24:02.762 Boot Partition: Not Supported 00:24:02.762 Memory Page Size Minimum: 4096 bytes 00:24:02.762 Memory Page Size Maximum: 4096 bytes 00:24:02.762 Persistent Memory Region: Not Supported 00:24:02.762 Optional Asynchronous Events Supported 00:24:02.762 Namespace Attribute Notices: Not Supported 00:24:02.762 Firmware Activation Notices: Not Supported 00:24:02.762 ANA Change Notices: Not Supported 00:24:02.762 PLE Aggregate Log Change Notices: Not Supported 00:24:02.762 LBA Status Info Alert Notices: Not Supported 00:24:02.762 EGE Aggregate Log Change Notices: Not Supported 00:24:02.762 Normal NVM Subsystem Shutdown event: Not Supported 00:24:02.762 Zone Descriptor Change Notices: Not Supported 00:24:02.762 Discovery Log Change Notices: Supported 
00:24:02.762 Controller Attributes 00:24:02.762 128-bit Host Identifier: Not Supported 00:24:02.762 Non-Operational Permissive Mode: Not Supported 00:24:02.762 NVM Sets: Not Supported 00:24:02.762 Read Recovery Levels: Not Supported 00:24:02.762 Endurance Groups: Not Supported 00:24:02.762 Predictable Latency Mode: Not Supported 00:24:02.762 Traffic Based Keep ALive: Not Supported 00:24:02.762 Namespace Granularity: Not Supported 00:24:02.762 SQ Associations: Not Supported 00:24:02.762 UUID List: Not Supported 00:24:02.762 Multi-Domain Subsystem: Not Supported 00:24:02.762 Fixed Capacity Management: Not Supported 00:24:02.762 Variable Capacity Management: Not Supported 00:24:02.762 Delete Endurance Group: Not Supported 00:24:02.762 Delete NVM Set: Not Supported 00:24:02.762 Extended LBA Formats Supported: Not Supported 00:24:02.762 Flexible Data Placement Supported: Not Supported 00:24:02.762 00:24:02.762 Controller Memory Buffer Support 00:24:02.762 ================================ 00:24:02.762 Supported: No 00:24:02.762 00:24:02.762 Persistent Memory Region Support 00:24:02.762 ================================ 00:24:02.762 Supported: No 00:24:02.762 00:24:02.762 Admin Command Set Attributes 00:24:02.762 ============================ 00:24:02.762 Security Send/Receive: Not Supported 00:24:02.762 Format NVM: Not Supported 00:24:02.762 Firmware Activate/Download: Not Supported 00:24:02.762 Namespace Management: Not Supported 00:24:02.762 Device Self-Test: Not Supported 00:24:02.762 Directives: Not Supported 00:24:02.762 NVMe-MI: Not Supported 00:24:02.762 Virtualization Management: Not Supported 00:24:02.762 Doorbell Buffer Config: Not Supported 00:24:02.762 Get LBA Status Capability: Not Supported 00:24:02.762 Command & Feature Lockdown Capability: Not Supported 00:24:02.762 Abort Command Limit: 1 00:24:02.762 Async Event Request Limit: 1 00:24:02.762 Number of Firmware Slots: N/A 00:24:02.762 Firmware Slot 1 Read-Only: N/A 00:24:02.762 Firmware Activation Without Reset: N/A 00:24:02.762 Multiple Update Detection Support: N/A 00:24:02.762 Firmware Update Granularity: No Information Provided 00:24:02.762 Per-Namespace SMART Log: No 00:24:02.762 Asymmetric Namespace Access Log Page: Not Supported 00:24:02.762 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:02.762 Command Effects Log Page: Not Supported 00:24:02.762 Get Log Page Extended Data: Supported 00:24:02.762 Telemetry Log Pages: Not Supported 00:24:02.762 Persistent Event Log Pages: Not Supported 00:24:02.762 Supported Log Pages Log Page: May Support 00:24:02.762 Commands Supported & Effects Log Page: Not Supported 00:24:02.762 Feature Identifiers & Effects Log Page:May Support 00:24:02.762 NVMe-MI Commands & Effects Log Page: May Support 00:24:02.762 Data Area 4 for Telemetry Log: Not Supported 00:24:02.762 Error Log Page Entries Supported: 1 00:24:02.762 Keep Alive: Not Supported 00:24:02.762 00:24:02.762 NVM Command Set Attributes 00:24:02.762 ========================== 00:24:02.762 Submission Queue Entry Size 00:24:02.762 Max: 1 00:24:02.762 Min: 1 00:24:02.762 Completion Queue Entry Size 00:24:02.762 Max: 1 00:24:02.762 Min: 1 00:24:02.762 Number of Namespaces: 0 00:24:02.762 Compare Command: Not Supported 00:24:02.762 Write Uncorrectable Command: Not Supported 00:24:02.762 Dataset Management Command: Not Supported 00:24:02.762 Write Zeroes Command: Not Supported 00:24:02.762 Set Features Save Field: Not Supported 00:24:02.762 Reservations: Not Supported 00:24:02.762 Timestamp: Not Supported 00:24:02.762 Copy: Not 
Supported 00:24:02.762 Volatile Write Cache: Not Present 00:24:02.762 Atomic Write Unit (Normal): 1 00:24:02.762 Atomic Write Unit (PFail): 1 00:24:02.762 Atomic Compare & Write Unit: 1 00:24:02.762 Fused Compare & Write: Not Supported 00:24:02.762 Scatter-Gather List 00:24:02.763 SGL Command Set: Supported 00:24:02.763 SGL Keyed: Not Supported 00:24:02.763 SGL Bit Bucket Descriptor: Not Supported 00:24:02.763 SGL Metadata Pointer: Not Supported 00:24:02.763 Oversized SGL: Not Supported 00:24:02.763 SGL Metadata Address: Not Supported 00:24:02.763 SGL Offset: Supported 00:24:02.763 Transport SGL Data Block: Not Supported 00:24:02.763 Replay Protected Memory Block: Not Supported 00:24:02.763 00:24:02.763 Firmware Slot Information 00:24:02.763 ========================= 00:24:02.763 Active slot: 0 00:24:02.763 00:24:02.763 00:24:02.763 Error Log 00:24:02.763 ========= 00:24:02.763 00:24:02.763 Active Namespaces 00:24:02.763 ================= 00:24:02.763 Discovery Log Page 00:24:02.763 ================== 00:24:02.763 Generation Counter: 2 00:24:02.763 Number of Records: 2 00:24:02.763 Record Format: 0 00:24:02.763 00:24:02.763 Discovery Log Entry 0 00:24:02.763 ---------------------- 00:24:02.763 Transport Type: 3 (TCP) 00:24:02.763 Address Family: 1 (IPv4) 00:24:02.763 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:02.763 Entry Flags: 00:24:02.763 Duplicate Returned Information: 0 00:24:02.763 Explicit Persistent Connection Support for Discovery: 0 00:24:02.763 Transport Requirements: 00:24:02.763 Secure Channel: Not Specified 00:24:02.763 Port ID: 1 (0x0001) 00:24:02.763 Controller ID: 65535 (0xffff) 00:24:02.763 Admin Max SQ Size: 32 00:24:02.763 Transport Service Identifier: 4420 00:24:02.763 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:02.763 Transport Address: 10.0.0.1 00:24:02.763 Discovery Log Entry 1 00:24:02.763 ---------------------- 00:24:02.763 Transport Type: 3 (TCP) 00:24:02.763 Address Family: 1 (IPv4) 00:24:02.763 Subsystem Type: 2 (NVM Subsystem) 00:24:02.763 Entry Flags: 00:24:02.763 Duplicate Returned Information: 0 00:24:02.763 Explicit Persistent Connection Support for Discovery: 0 00:24:02.763 Transport Requirements: 00:24:02.763 Secure Channel: Not Specified 00:24:02.763 Port ID: 1 (0x0001) 00:24:02.763 Controller ID: 65535 (0xffff) 00:24:02.763 Admin Max SQ Size: 32 00:24:02.763 Transport Service Identifier: 4420 00:24:02.763 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:02.763 Transport Address: 10.0.0.1 00:24:02.763 02:43:36 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:02.763 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.763 get_feature(0x01) failed 00:24:02.763 get_feature(0x02) failed 00:24:02.763 get_feature(0x04) failed 00:24:02.763 ===================================================== 00:24:02.763 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:02.763 ===================================================== 00:24:02.763 Controller Capabilities/Features 00:24:02.763 ================================ 00:24:02.763 Vendor ID: 0000 00:24:02.763 Subsystem Vendor ID: 0000 00:24:02.763 Serial Number: 34a72eb59199f8d4cc7b 00:24:02.763 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:02.763 Firmware Version: 6.7.0-68 00:24:02.763 Recommended Arb Burst: 6 00:24:02.763 IEEE OUI Identifier: 00 00 00 
00:24:02.763 Multi-path I/O 00:24:02.763 May have multiple subsystem ports: Yes 00:24:02.763 May have multiple controllers: Yes 00:24:02.763 Associated with SR-IOV VF: No 00:24:02.763 Max Data Transfer Size: Unlimited 00:24:02.763 Max Number of Namespaces: 1024 00:24:02.763 Max Number of I/O Queues: 128 00:24:02.763 NVMe Specification Version (VS): 1.3 00:24:02.763 NVMe Specification Version (Identify): 1.3 00:24:02.763 Maximum Queue Entries: 1024 00:24:02.763 Contiguous Queues Required: No 00:24:02.763 Arbitration Mechanisms Supported 00:24:02.763 Weighted Round Robin: Not Supported 00:24:02.763 Vendor Specific: Not Supported 00:24:02.763 Reset Timeout: 7500 ms 00:24:02.763 Doorbell Stride: 4 bytes 00:24:02.763 NVM Subsystem Reset: Not Supported 00:24:02.763 Command Sets Supported 00:24:02.763 NVM Command Set: Supported 00:24:02.763 Boot Partition: Not Supported 00:24:02.763 Memory Page Size Minimum: 4096 bytes 00:24:02.763 Memory Page Size Maximum: 4096 bytes 00:24:02.763 Persistent Memory Region: Not Supported 00:24:02.763 Optional Asynchronous Events Supported 00:24:02.763 Namespace Attribute Notices: Supported 00:24:02.763 Firmware Activation Notices: Not Supported 00:24:02.763 ANA Change Notices: Supported 00:24:02.763 PLE Aggregate Log Change Notices: Not Supported 00:24:02.763 LBA Status Info Alert Notices: Not Supported 00:24:02.763 EGE Aggregate Log Change Notices: Not Supported 00:24:02.763 Normal NVM Subsystem Shutdown event: Not Supported 00:24:02.763 Zone Descriptor Change Notices: Not Supported 00:24:02.763 Discovery Log Change Notices: Not Supported 00:24:02.763 Controller Attributes 00:24:02.763 128-bit Host Identifier: Supported 00:24:02.763 Non-Operational Permissive Mode: Not Supported 00:24:02.763 NVM Sets: Not Supported 00:24:02.763 Read Recovery Levels: Not Supported 00:24:02.763 Endurance Groups: Not Supported 00:24:02.763 Predictable Latency Mode: Not Supported 00:24:02.763 Traffic Based Keep ALive: Supported 00:24:02.763 Namespace Granularity: Not Supported 00:24:02.763 SQ Associations: Not Supported 00:24:02.763 UUID List: Not Supported 00:24:02.763 Multi-Domain Subsystem: Not Supported 00:24:02.763 Fixed Capacity Management: Not Supported 00:24:02.763 Variable Capacity Management: Not Supported 00:24:02.763 Delete Endurance Group: Not Supported 00:24:02.763 Delete NVM Set: Not Supported 00:24:02.763 Extended LBA Formats Supported: Not Supported 00:24:02.763 Flexible Data Placement Supported: Not Supported 00:24:02.763 00:24:02.763 Controller Memory Buffer Support 00:24:02.763 ================================ 00:24:02.763 Supported: No 00:24:02.763 00:24:02.763 Persistent Memory Region Support 00:24:02.763 ================================ 00:24:02.763 Supported: No 00:24:02.763 00:24:02.763 Admin Command Set Attributes 00:24:02.763 ============================ 00:24:02.763 Security Send/Receive: Not Supported 00:24:02.763 Format NVM: Not Supported 00:24:02.763 Firmware Activate/Download: Not Supported 00:24:02.763 Namespace Management: Not Supported 00:24:02.763 Device Self-Test: Not Supported 00:24:02.763 Directives: Not Supported 00:24:02.763 NVMe-MI: Not Supported 00:24:02.763 Virtualization Management: Not Supported 00:24:02.763 Doorbell Buffer Config: Not Supported 00:24:02.763 Get LBA Status Capability: Not Supported 00:24:02.763 Command & Feature Lockdown Capability: Not Supported 00:24:02.763 Abort Command Limit: 4 00:24:02.763 Async Event Request Limit: 4 00:24:02.763 Number of Firmware Slots: N/A 00:24:02.763 Firmware Slot 1 Read-Only: N/A 00:24:02.763 
Firmware Activation Without Reset: N/A 00:24:02.763 Multiple Update Detection Support: N/A 00:24:02.763 Firmware Update Granularity: No Information Provided 00:24:02.763 Per-Namespace SMART Log: Yes 00:24:02.763 Asymmetric Namespace Access Log Page: Supported 00:24:02.763 ANA Transition Time : 10 sec 00:24:02.763 00:24:02.763 Asymmetric Namespace Access Capabilities 00:24:02.763 ANA Optimized State : Supported 00:24:02.763 ANA Non-Optimized State : Supported 00:24:02.763 ANA Inaccessible State : Supported 00:24:02.763 ANA Persistent Loss State : Supported 00:24:02.763 ANA Change State : Supported 00:24:02.763 ANAGRPID is not changed : No 00:24:02.763 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:02.763 00:24:02.763 ANA Group Identifier Maximum : 128 00:24:02.763 Number of ANA Group Identifiers : 128 00:24:02.763 Max Number of Allowed Namespaces : 1024 00:24:02.763 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:02.763 Command Effects Log Page: Supported 00:24:02.763 Get Log Page Extended Data: Supported 00:24:02.763 Telemetry Log Pages: Not Supported 00:24:02.763 Persistent Event Log Pages: Not Supported 00:24:02.763 Supported Log Pages Log Page: May Support 00:24:02.763 Commands Supported & Effects Log Page: Not Supported 00:24:02.763 Feature Identifiers & Effects Log Page:May Support 00:24:02.763 NVMe-MI Commands & Effects Log Page: May Support 00:24:02.763 Data Area 4 for Telemetry Log: Not Supported 00:24:02.763 Error Log Page Entries Supported: 128 00:24:02.763 Keep Alive: Supported 00:24:02.763 Keep Alive Granularity: 1000 ms 00:24:02.763 00:24:02.763 NVM Command Set Attributes 00:24:02.763 ========================== 00:24:02.763 Submission Queue Entry Size 00:24:02.763 Max: 64 00:24:02.763 Min: 64 00:24:02.763 Completion Queue Entry Size 00:24:02.763 Max: 16 00:24:02.763 Min: 16 00:24:02.763 Number of Namespaces: 1024 00:24:02.763 Compare Command: Not Supported 00:24:02.763 Write Uncorrectable Command: Not Supported 00:24:02.763 Dataset Management Command: Supported 00:24:02.763 Write Zeroes Command: Supported 00:24:02.763 Set Features Save Field: Not Supported 00:24:02.763 Reservations: Not Supported 00:24:02.764 Timestamp: Not Supported 00:24:02.764 Copy: Not Supported 00:24:02.764 Volatile Write Cache: Present 00:24:02.764 Atomic Write Unit (Normal): 1 00:24:02.764 Atomic Write Unit (PFail): 1 00:24:02.764 Atomic Compare & Write Unit: 1 00:24:02.764 Fused Compare & Write: Not Supported 00:24:02.764 Scatter-Gather List 00:24:02.764 SGL Command Set: Supported 00:24:02.764 SGL Keyed: Not Supported 00:24:02.764 SGL Bit Bucket Descriptor: Not Supported 00:24:02.764 SGL Metadata Pointer: Not Supported 00:24:02.764 Oversized SGL: Not Supported 00:24:02.764 SGL Metadata Address: Not Supported 00:24:02.764 SGL Offset: Supported 00:24:02.764 Transport SGL Data Block: Not Supported 00:24:02.764 Replay Protected Memory Block: Not Supported 00:24:02.764 00:24:02.764 Firmware Slot Information 00:24:02.764 ========================= 00:24:02.764 Active slot: 0 00:24:02.764 00:24:02.764 Asymmetric Namespace Access 00:24:02.764 =========================== 00:24:02.764 Change Count : 0 00:24:02.764 Number of ANA Group Descriptors : 1 00:24:02.764 ANA Group Descriptor : 0 00:24:02.764 ANA Group ID : 1 00:24:02.764 Number of NSID Values : 1 00:24:02.764 Change Count : 0 00:24:02.764 ANA State : 1 00:24:02.764 Namespace Identifier : 1 00:24:02.764 00:24:02.764 Commands Supported and Effects 00:24:02.764 ============================== 00:24:02.764 Admin Commands 00:24:02.764 -------------- 
00:24:02.764 Get Log Page (02h): Supported 00:24:02.764 Identify (06h): Supported 00:24:02.764 Abort (08h): Supported 00:24:02.764 Set Features (09h): Supported 00:24:02.764 Get Features (0Ah): Supported 00:24:02.764 Asynchronous Event Request (0Ch): Supported 00:24:02.764 Keep Alive (18h): Supported 00:24:02.764 I/O Commands 00:24:02.764 ------------ 00:24:02.764 Flush (00h): Supported 00:24:02.764 Write (01h): Supported LBA-Change 00:24:02.764 Read (02h): Supported 00:24:02.764 Write Zeroes (08h): Supported LBA-Change 00:24:02.764 Dataset Management (09h): Supported 00:24:02.764 00:24:02.764 Error Log 00:24:02.764 ========= 00:24:02.764 Entry: 0 00:24:02.764 Error Count: 0x3 00:24:02.764 Submission Queue Id: 0x0 00:24:02.764 Command Id: 0x5 00:24:02.764 Phase Bit: 0 00:24:02.764 Status Code: 0x2 00:24:02.764 Status Code Type: 0x0 00:24:02.764 Do Not Retry: 1 00:24:02.764 Error Location: 0x28 00:24:02.764 LBA: 0x0 00:24:02.764 Namespace: 0x0 00:24:02.764 Vendor Log Page: 0x0 00:24:02.764 ----------- 00:24:02.764 Entry: 1 00:24:02.764 Error Count: 0x2 00:24:02.764 Submission Queue Id: 0x0 00:24:02.764 Command Id: 0x5 00:24:02.764 Phase Bit: 0 00:24:02.764 Status Code: 0x2 00:24:02.764 Status Code Type: 0x0 00:24:02.764 Do Not Retry: 1 00:24:02.764 Error Location: 0x28 00:24:02.764 LBA: 0x0 00:24:02.764 Namespace: 0x0 00:24:02.764 Vendor Log Page: 0x0 00:24:02.764 ----------- 00:24:02.764 Entry: 2 00:24:02.764 Error Count: 0x1 00:24:02.764 Submission Queue Id: 0x0 00:24:02.764 Command Id: 0x4 00:24:02.764 Phase Bit: 0 00:24:02.764 Status Code: 0x2 00:24:02.764 Status Code Type: 0x0 00:24:02.764 Do Not Retry: 1 00:24:02.764 Error Location: 0x28 00:24:02.764 LBA: 0x0 00:24:02.764 Namespace: 0x0 00:24:02.764 Vendor Log Page: 0x0 00:24:02.764 00:24:02.764 Number of Queues 00:24:02.764 ================ 00:24:02.764 Number of I/O Submission Queues: 128 00:24:02.764 Number of I/O Completion Queues: 128 00:24:02.764 00:24:02.764 ZNS Specific Controller Data 00:24:02.764 ============================ 00:24:02.764 Zone Append Size Limit: 0 00:24:02.764 00:24:02.764 00:24:02.764 Active Namespaces 00:24:02.764 ================= 00:24:02.764 get_feature(0x05) failed 00:24:02.764 Namespace ID:1 00:24:02.764 Command Set Identifier: NVM (00h) 00:24:02.764 Deallocate: Supported 00:24:02.764 Deallocated/Unwritten Error: Not Supported 00:24:02.764 Deallocated Read Value: Unknown 00:24:02.764 Deallocate in Write Zeroes: Not Supported 00:24:02.764 Deallocated Guard Field: 0xFFFF 00:24:02.764 Flush: Supported 00:24:02.764 Reservation: Not Supported 00:24:02.764 Namespace Sharing Capabilities: Multiple Controllers 00:24:02.764 Size (in LBAs): 3750748848 (1788GiB) 00:24:02.764 Capacity (in LBAs): 3750748848 (1788GiB) 00:24:02.764 Utilization (in LBAs): 3750748848 (1788GiB) 00:24:02.764 UUID: 4835fc62-00e0-48a7-a48c-5941f5c7d6d0 00:24:02.764 Thin Provisioning: Not Supported 00:24:02.764 Per-NS Atomic Units: Yes 00:24:02.764 Atomic Write Unit (Normal): 8 00:24:02.764 Atomic Write Unit (PFail): 8 00:24:02.764 Preferred Write Granularity: 8 00:24:02.764 Atomic Compare & Write Unit: 8 00:24:02.764 Atomic Boundary Size (Normal): 0 00:24:02.764 Atomic Boundary Size (PFail): 0 00:24:02.764 Atomic Boundary Offset: 0 00:24:02.764 NGUID/EUI64 Never Reused: No 00:24:02.764 ANA group ID: 1 00:24:02.764 Namespace Write Protected: No 00:24:02.764 Number of LBA Formats: 1 00:24:02.764 Current LBA Format: LBA Format #00 00:24:02.764 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:02.764 00:24:02.764 02:43:36 -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:02.764 02:43:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:02.764 02:43:36 -- nvmf/common.sh@117 -- # sync 00:24:02.764 02:43:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:02.764 02:43:36 -- nvmf/common.sh@120 -- # set +e 00:24:02.764 02:43:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.764 02:43:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:02.764 rmmod nvme_tcp 00:24:02.764 rmmod nvme_fabrics 00:24:02.764 02:43:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.764 02:43:36 -- nvmf/common.sh@124 -- # set -e 00:24:02.764 02:43:36 -- nvmf/common.sh@125 -- # return 0 00:24:02.764 02:43:36 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:24:02.764 02:43:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:02.764 02:43:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:02.764 02:43:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:02.764 02:43:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.764 02:43:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:02.764 02:43:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.764 02:43:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.764 02:43:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.775 02:43:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:04.775 02:43:38 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:04.775 02:43:38 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:04.775 02:43:38 -- nvmf/common.sh@675 -- # echo 0 00:24:04.775 02:43:38 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:04.775 02:43:38 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:04.775 02:43:38 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:04.775 02:43:38 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:04.775 02:43:38 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:04.775 02:43:38 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:04.775 02:43:38 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:08.987 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:08.987 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:10.373 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:24:10.373 00:24:10.373 real 0m19.804s 00:24:10.373 user 0m4.789s 00:24:10.373 sys 0m10.375s 00:24:10.373 
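Editor's note: the identify_kernel_nvmf run above stands up a kernel nvmet target through configfs, exports /dev/nvme0n1 on 10.0.0.1:4420, runs the discovery/identify checks, and then tears everything down in clean_kernel_target. A condensed recap of that sequence follows. The xtrace does not show which configfs attribute file each bare `echo` is redirected into, so the attribute names below are the standard nvmet ones and are inferred, not copied from nvmf/common.sh (the Model Number "SPDK-nqn.2016-06.io.spdk:testnqn" seen in the identify output supports the attr_model guess).

# build the target, as configure_kernel_target does above
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe nvmet nvmet_tcp
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"           # assumed target of 'echo SPDK-...'
echo 1            > "$subsys/attr_allow_any_host"                      # assumed target of first 'echo 1'
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# tear it down, as clean_kernel_target does above
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
modprobe -r nvmet_tcp nvmet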
02:43:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:10.373 02:43:43 -- common/autotest_common.sh@10 -- # set +x 00:24:10.373 ************************************ 00:24:10.373 END TEST nvmf_identify_kernel_target 00:24:10.373 ************************************ 00:24:10.373 02:43:43 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:10.373 02:43:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:10.373 02:43:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:10.373 02:43:43 -- common/autotest_common.sh@10 -- # set +x 00:24:10.373 ************************************ 00:24:10.373 START TEST nvmf_auth 00:24:10.373 ************************************ 00:24:10.373 02:43:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:10.635 * Looking for test storage... 00:24:10.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:10.635 02:43:44 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.635 02:43:44 -- nvmf/common.sh@7 -- # uname -s 00:24:10.635 02:43:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.635 02:43:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.635 02:43:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.635 02:43:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.635 02:43:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.635 02:43:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.635 02:43:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.635 02:43:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.635 02:43:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.635 02:43:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.635 02:43:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:10.635 02:43:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:10.635 02:43:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.635 02:43:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.635 02:43:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.635 02:43:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.635 02:43:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.635 02:43:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.635 02:43:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.635 02:43:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.635 02:43:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.635 02:43:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.635 02:43:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.635 02:43:44 -- paths/export.sh@5 -- # export PATH 00:24:10.635 02:43:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.635 02:43:44 -- nvmf/common.sh@47 -- # : 0 00:24:10.635 02:43:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.635 02:43:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.635 02:43:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.635 02:43:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.635 02:43:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.635 02:43:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.635 02:43:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.635 02:43:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:10.635 02:43:44 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:10.635 02:43:44 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:10.635 02:43:44 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:10.635 02:43:44 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:10.635 02:43:44 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:10.635 02:43:44 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:10.635 02:43:44 -- host/auth.sh@21 -- # keys=() 00:24:10.635 02:43:44 -- host/auth.sh@77 -- # nvmftestinit 00:24:10.635 02:43:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:10.635 02:43:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.635 02:43:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:10.635 02:43:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:10.635 02:43:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:10.635 02:43:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.635 02:43:44 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.635 02:43:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.635 02:43:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:10.635 02:43:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:10.635 02:43:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:10.635 02:43:44 -- common/autotest_common.sh@10 -- # set +x 00:24:17.228 02:43:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:17.228 02:43:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:17.228 02:43:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:17.228 02:43:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:17.228 02:43:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:17.228 02:43:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:17.228 02:43:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:17.228 02:43:50 -- nvmf/common.sh@295 -- # net_devs=() 00:24:17.228 02:43:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:17.228 02:43:50 -- nvmf/common.sh@296 -- # e810=() 00:24:17.228 02:43:50 -- nvmf/common.sh@296 -- # local -ga e810 00:24:17.228 02:43:50 -- nvmf/common.sh@297 -- # x722=() 00:24:17.228 02:43:50 -- nvmf/common.sh@297 -- # local -ga x722 00:24:17.228 02:43:50 -- nvmf/common.sh@298 -- # mlx=() 00:24:17.228 02:43:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:17.228 02:43:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.228 02:43:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.228 02:43:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.228 02:43:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.228 02:43:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.228 02:43:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.228 02:43:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.228 02:43:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.228 02:43:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.228 02:43:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.228 02:43:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.228 02:43:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:17.228 02:43:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:17.228 02:43:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:17.228 02:43:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:17.228 02:43:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:17.228 02:43:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:17.228 02:43:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.228 02:43:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:17.228 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:17.228 02:43:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.228 02:43:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.228 02:43:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.228 02:43:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.228 02:43:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.228 02:43:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.228 02:43:50 -- nvmf/common.sh@341 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:24:17.228 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:17.228 02:43:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.228 02:43:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.228 02:43:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.228 02:43:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.228 02:43:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.229 02:43:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:17.229 02:43:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:17.229 02:43:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:17.229 02:43:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.229 02:43:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.229 02:43:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:17.229 02:43:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.229 02:43:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:17.229 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:17.229 02:43:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.229 02:43:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.229 02:43:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.229 02:43:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:17.229 02:43:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.229 02:43:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:17.229 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:17.229 02:43:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.229 02:43:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:17.229 02:43:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:17.229 02:43:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:17.229 02:43:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:17.229 02:43:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:17.229 02:43:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.229 02:43:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.229 02:43:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.229 02:43:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:17.229 02:43:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.229 02:43:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.229 02:43:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:17.229 02:43:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.229 02:43:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.229 02:43:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:17.229 02:43:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:17.229 02:43:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.229 02:43:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.229 02:43:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.229 02:43:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.229 02:43:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:17.229 02:43:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.229 02:43:50 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.229 02:43:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.229 02:43:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:17.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:24:17.229 00:24:17.229 --- 10.0.0.2 ping statistics --- 00:24:17.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.229 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:24:17.229 02:43:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:17.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:24:17.229 00:24:17.229 --- 10.0.0.1 ping statistics --- 00:24:17.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.229 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:24:17.229 02:43:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.229 02:43:50 -- nvmf/common.sh@411 -- # return 0 00:24:17.229 02:43:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:17.229 02:43:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.229 02:43:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:17.229 02:43:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:17.229 02:43:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.229 02:43:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:17.229 02:43:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:17.229 02:43:50 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:24:17.229 02:43:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:17.229 02:43:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:17.229 02:43:50 -- common/autotest_common.sh@10 -- # set +x 00:24:17.229 02:43:50 -- nvmf/common.sh@470 -- # nvmfpid=246659 00:24:17.229 02:43:50 -- nvmf/common.sh@471 -- # waitforlisten 246659 00:24:17.229 02:43:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:17.229 02:43:50 -- common/autotest_common.sh@817 -- # '[' -z 246659 ']' 00:24:17.229 02:43:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.229 02:43:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:17.229 02:43:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
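Editor's note: before the auth test proper starts, nvmf_tcp_init above splits the two e810 ports into an initiator/target pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), which is why nvmf_tgt is launched under `ip netns exec`. Condensed from the trace above (interface names, addresses, and the nvmf_tgt command line are the ones from this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # namespace -> root ns
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth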
00:24:17.229 02:43:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:17.229 02:43:50 -- common/autotest_common.sh@10 -- # set +x 00:24:18.170 02:43:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:18.170 02:43:51 -- common/autotest_common.sh@850 -- # return 0 00:24:18.170 02:43:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:18.170 02:43:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:18.170 02:43:51 -- common/autotest_common.sh@10 -- # set +x 00:24:18.170 02:43:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.170 02:43:51 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:18.170 02:43:51 -- host/auth.sh@81 -- # gen_key null 32 00:24:18.170 02:43:51 -- host/auth.sh@53 -- # local digest len file key 00:24:18.170 02:43:51 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:18.170 02:43:51 -- host/auth.sh@54 -- # local -A digests 00:24:18.170 02:43:51 -- host/auth.sh@56 -- # digest=null 00:24:18.170 02:43:51 -- host/auth.sh@56 -- # len=32 00:24:18.170 02:43:51 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:18.170 02:43:51 -- host/auth.sh@57 -- # key=fe993f6dd2d14578ff65973f7141d6f2 00:24:18.170 02:43:51 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:18.170 02:43:51 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.oBl 00:24:18.170 02:43:51 -- host/auth.sh@59 -- # format_dhchap_key fe993f6dd2d14578ff65973f7141d6f2 0 00:24:18.170 02:43:51 -- nvmf/common.sh@708 -- # format_key DHHC-1 fe993f6dd2d14578ff65973f7141d6f2 0 00:24:18.170 02:43:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:18.170 02:43:51 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:18.170 02:43:51 -- nvmf/common.sh@693 -- # key=fe993f6dd2d14578ff65973f7141d6f2 00:24:18.170 02:43:51 -- nvmf/common.sh@693 -- # digest=0 00:24:18.170 02:43:51 -- nvmf/common.sh@694 -- # python - 00:24:18.170 02:43:51 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.oBl 00:24:18.170 02:43:51 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.oBl 00:24:18.170 02:43:51 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.oBl 00:24:18.170 02:43:51 -- host/auth.sh@82 -- # gen_key null 48 00:24:18.170 02:43:51 -- host/auth.sh@53 -- # local digest len file key 00:24:18.170 02:43:51 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:18.170 02:43:51 -- host/auth.sh@54 -- # local -A digests 00:24:18.170 02:43:51 -- host/auth.sh@56 -- # digest=null 00:24:18.170 02:43:51 -- host/auth.sh@56 -- # len=48 00:24:18.170 02:43:51 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:18.170 02:43:51 -- host/auth.sh@57 -- # key=d7c8775715f5b83b511c919f721198c12dde2477b562037b 00:24:18.170 02:43:51 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:18.170 02:43:51 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.3y7 00:24:18.170 02:43:51 -- host/auth.sh@59 -- # format_dhchap_key d7c8775715f5b83b511c919f721198c12dde2477b562037b 0 00:24:18.170 02:43:51 -- nvmf/common.sh@708 -- # format_key DHHC-1 d7c8775715f5b83b511c919f721198c12dde2477b562037b 0 00:24:18.170 02:43:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:18.170 02:43:51 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:18.170 02:43:51 -- nvmf/common.sh@693 -- # key=d7c8775715f5b83b511c919f721198c12dde2477b562037b 00:24:18.170 02:43:51 -- nvmf/common.sh@693 -- # 
digest=0 00:24:18.170 02:43:51 -- nvmf/common.sh@694 -- # python - 00:24:18.429 02:43:51 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.3y7 00:24:18.429 02:43:51 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.3y7 00:24:18.429 02:43:51 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.3y7 00:24:18.429 02:43:51 -- host/auth.sh@83 -- # gen_key sha256 32 00:24:18.429 02:43:51 -- host/auth.sh@53 -- # local digest len file key 00:24:18.429 02:43:51 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:18.429 02:43:51 -- host/auth.sh@54 -- # local -A digests 00:24:18.429 02:43:51 -- host/auth.sh@56 -- # digest=sha256 00:24:18.429 02:43:51 -- host/auth.sh@56 -- # len=32 00:24:18.429 02:43:51 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:18.429 02:43:51 -- host/auth.sh@57 -- # key=8ba5e123b381fae045c69b27ac0db85c 00:24:18.429 02:43:51 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:24:18.429 02:43:51 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.iod 00:24:18.429 02:43:51 -- host/auth.sh@59 -- # format_dhchap_key 8ba5e123b381fae045c69b27ac0db85c 1 00:24:18.429 02:43:51 -- nvmf/common.sh@708 -- # format_key DHHC-1 8ba5e123b381fae045c69b27ac0db85c 1 00:24:18.429 02:43:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:18.429 02:43:51 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:18.429 02:43:51 -- nvmf/common.sh@693 -- # key=8ba5e123b381fae045c69b27ac0db85c 00:24:18.429 02:43:51 -- nvmf/common.sh@693 -- # digest=1 00:24:18.429 02:43:51 -- nvmf/common.sh@694 -- # python - 00:24:18.429 02:43:51 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.iod 00:24:18.429 02:43:51 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.iod 00:24:18.429 02:43:51 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.iod 00:24:18.429 02:43:51 -- host/auth.sh@84 -- # gen_key sha384 48 00:24:18.429 02:43:51 -- host/auth.sh@53 -- # local digest len file key 00:24:18.429 02:43:51 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:18.429 02:43:51 -- host/auth.sh@54 -- # local -A digests 00:24:18.429 02:43:51 -- host/auth.sh@56 -- # digest=sha384 00:24:18.429 02:43:51 -- host/auth.sh@56 -- # len=48 00:24:18.429 02:43:51 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:18.429 02:43:51 -- host/auth.sh@57 -- # key=e6374e37c142f78e6ebd697721aef0e386ef229b48b242f2 00:24:18.429 02:43:51 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:24:18.429 02:43:51 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.4vZ 00:24:18.429 02:43:51 -- host/auth.sh@59 -- # format_dhchap_key e6374e37c142f78e6ebd697721aef0e386ef229b48b242f2 2 00:24:18.429 02:43:51 -- nvmf/common.sh@708 -- # format_key DHHC-1 e6374e37c142f78e6ebd697721aef0e386ef229b48b242f2 2 00:24:18.429 02:43:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:18.429 02:43:51 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:18.429 02:43:51 -- nvmf/common.sh@693 -- # key=e6374e37c142f78e6ebd697721aef0e386ef229b48b242f2 00:24:18.429 02:43:51 -- nvmf/common.sh@693 -- # digest=2 00:24:18.429 02:43:51 -- nvmf/common.sh@694 -- # python - 00:24:18.429 02:43:51 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.4vZ 00:24:18.429 02:43:51 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.4vZ 00:24:18.429 02:43:51 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.4vZ 00:24:18.429 02:43:51 -- host/auth.sh@85 -- # gen_key sha512 64 00:24:18.429 02:43:51 -- host/auth.sh@53 -- # local digest len file key 00:24:18.429 02:43:51 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:18.429 02:43:51 -- host/auth.sh@54 -- # local -A digests 00:24:18.429 02:43:51 -- host/auth.sh@56 -- # digest=sha512 00:24:18.429 02:43:51 -- host/auth.sh@56 -- # len=64 00:24:18.429 02:43:51 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:18.429 02:43:51 -- host/auth.sh@57 -- # key=5f55803589a1a460e4a87dad63896138f60ca04f832e4886a44abebfb25b7958 00:24:18.429 02:43:51 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:24:18.429 02:43:51 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.yLs 00:24:18.429 02:43:51 -- host/auth.sh@59 -- # format_dhchap_key 5f55803589a1a460e4a87dad63896138f60ca04f832e4886a44abebfb25b7958 3 00:24:18.429 02:43:51 -- nvmf/common.sh@708 -- # format_key DHHC-1 5f55803589a1a460e4a87dad63896138f60ca04f832e4886a44abebfb25b7958 3 00:24:18.429 02:43:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:18.429 02:43:51 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:18.429 02:43:51 -- nvmf/common.sh@693 -- # key=5f55803589a1a460e4a87dad63896138f60ca04f832e4886a44abebfb25b7958 00:24:18.429 02:43:51 -- nvmf/common.sh@693 -- # digest=3 00:24:18.429 02:43:51 -- nvmf/common.sh@694 -- # python - 00:24:18.429 02:43:51 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.yLs 00:24:18.429 02:43:52 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.yLs 00:24:18.429 02:43:52 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.yLs 00:24:18.429 02:43:52 -- host/auth.sh@87 -- # waitforlisten 246659 00:24:18.429 02:43:52 -- common/autotest_common.sh@817 -- # '[' -z 246659 ']' 00:24:18.430 02:43:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.430 02:43:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:18.430 02:43:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
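Editor's note: gen_key above draws random bytes with xxd and hands the hex string, together with a digest index (0 = null, 1 = sha256, 2 = sha384, 3 = sha512), to an inline `python -` helper whose body is not shown in the xtrace. A rough stand-in for that formatting step is sketched below, assuming the usual NVMe DH-HMAC-CHAP secret container (base64 of the secret bytes followed by their CRC-32, wrapped as DHHC-1:<digest>:<base64>:); the CRC detail and the helper body are inferences, not lines from nvmf/common.sh, though the resulting "DHHC-1:00:..." keys do appear later in this log.

key=$(xxd -p -c0 -l 16 /dev/urandom)       # 32 hex chars, matching 'gen_key null 32'
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()              # the hex string itself is the secret material
digest = int(sys.argv[2])                  # 0/1/2/3, as passed by format_dhchap_key
crc = zlib.crc32(secret).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(secret + crc).decode()}:")
PY
chmod 0600 "$file"                         # as host/auth.sh does before echoing the key path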
00:24:18.430 02:43:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:18.430 02:43:52 -- common/autotest_common.sh@10 -- # set +x 00:24:18.690 02:43:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:18.690 02:43:52 -- common/autotest_common.sh@850 -- # return 0 00:24:18.690 02:43:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:18.690 02:43:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oBl 00:24:18.690 02:43:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.690 02:43:52 -- common/autotest_common.sh@10 -- # set +x 00:24:18.690 02:43:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.690 02:43:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:18.690 02:43:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.3y7 00:24:18.690 02:43:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.690 02:43:52 -- common/autotest_common.sh@10 -- # set +x 00:24:18.690 02:43:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.690 02:43:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:18.690 02:43:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.iod 00:24:18.690 02:43:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.690 02:43:52 -- common/autotest_common.sh@10 -- # set +x 00:24:18.690 02:43:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.690 02:43:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:18.690 02:43:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.4vZ 00:24:18.690 02:43:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.690 02:43:52 -- common/autotest_common.sh@10 -- # set +x 00:24:18.690 02:43:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.690 02:43:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:18.690 02:43:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.yLs 00:24:18.690 02:43:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.690 02:43:52 -- common/autotest_common.sh@10 -- # set +x 00:24:18.690 02:43:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.690 02:43:52 -- host/auth.sh@92 -- # nvmet_auth_init 00:24:18.690 02:43:52 -- host/auth.sh@35 -- # get_main_ns_ip 00:24:18.690 02:43:52 -- nvmf/common.sh@717 -- # local ip 00:24:18.690 02:43:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.690 02:43:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.690 02:43:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.690 02:43:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.690 02:43:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.690 02:43:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.690 02:43:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.690 02:43:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.690 02:43:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.690 02:43:52 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:18.690 02:43:52 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:18.690 02:43:52 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:18.690 02:43:52 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:18.690 02:43:52 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:18.690 02:43:52 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:18.690 02:43:52 -- nvmf/common.sh@628 -- # local block nvme 00:24:18.690 02:43:52 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:24:18.690 02:43:52 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:18.690 02:43:52 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:18.690 02:43:52 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:21.992 Waiting for block devices as requested 00:24:21.992 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:21.992 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:21.992 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:22.253 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:22.253 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:22.253 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:22.253 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:22.515 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:22.515 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:24:22.775 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:22.775 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:22.775 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:22.775 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:23.036 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:23.036 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:23.036 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:23.036 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:23.979 02:43:57 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:23.979 02:43:57 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:23.979 02:43:57 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:23.979 02:43:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:23.979 02:43:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:23.979 02:43:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:23.979 02:43:57 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:23.979 02:43:57 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:23.979 02:43:57 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:23.979 No valid GPT data, bailing 00:24:23.979 02:43:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:23.979 02:43:57 -- scripts/common.sh@391 -- # pt= 00:24:23.979 02:43:57 -- scripts/common.sh@392 -- # return 1 00:24:23.979 02:43:57 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:23.979 02:43:57 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:24:23.979 02:43:57 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:23.979 02:43:57 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:23.979 02:43:57 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:23.979 02:43:57 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:23.979 02:43:57 -- nvmf/common.sh@656 -- # echo 1 00:24:23.979 02:43:57 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:24:23.979 02:43:57 -- nvmf/common.sh@658 -- # echo 1 00:24:23.979 02:43:57 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:23.979 02:43:57 -- nvmf/common.sh@661 -- # echo tcp 00:24:23.979 02:43:57 -- 
nvmf/common.sh@662 -- # echo 4420 00:24:23.979 02:43:57 -- nvmf/common.sh@663 -- # echo ipv4 00:24:23.979 02:43:57 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:23.979 02:43:57 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:24:23.979 00:24:23.979 Discovery Log Number of Records 2, Generation counter 2 00:24:23.980 =====Discovery Log Entry 0====== 00:24:23.980 trtype: tcp 00:24:23.980 adrfam: ipv4 00:24:23.980 subtype: current discovery subsystem 00:24:23.980 treq: not specified, sq flow control disable supported 00:24:23.980 portid: 1 00:24:23.980 trsvcid: 4420 00:24:23.980 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:23.980 traddr: 10.0.0.1 00:24:23.980 eflags: none 00:24:23.980 sectype: none 00:24:23.980 =====Discovery Log Entry 1====== 00:24:23.980 trtype: tcp 00:24:23.980 adrfam: ipv4 00:24:23.980 subtype: nvme subsystem 00:24:23.980 treq: not specified, sq flow control disable supported 00:24:23.980 portid: 1 00:24:23.980 trsvcid: 4420 00:24:23.980 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:23.980 traddr: 10.0.0.1 00:24:23.980 eflags: none 00:24:23.980 sectype: none 00:24:23.980 02:43:57 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:23.980 02:43:57 -- host/auth.sh@37 -- # echo 0 00:24:23.980 02:43:57 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:23.980 02:43:57 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:23.980 02:43:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:23.980 02:43:57 -- host/auth.sh@44 -- # digest=sha256 00:24:23.980 02:43:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.980 02:43:57 -- host/auth.sh@44 -- # keyid=1 00:24:23.980 02:43:57 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:23.980 02:43:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:23.980 02:43:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:23.980 02:43:57 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:23.980 02:43:57 -- host/auth.sh@100 -- # IFS=, 00:24:23.980 02:43:57 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:24:23.980 02:43:57 -- host/auth.sh@100 -- # IFS=, 00:24:23.980 02:43:57 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:23.980 02:43:57 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:23.980 02:43:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:23.980 02:43:57 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:24:23.980 02:43:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:23.980 02:43:57 -- host/auth.sh@68 -- # keyid=1 00:24:23.980 02:43:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:23.980 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.980 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:23.980 02:43:57 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.980 02:43:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:23.980 02:43:57 -- nvmf/common.sh@717 -- # local ip 00:24:23.980 02:43:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:23.980 02:43:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:23.980 02:43:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.980 02:43:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.980 02:43:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:23.980 02:43:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.980 02:43:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:23.980 02:43:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:23.980 02:43:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:23.980 02:43:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:23.980 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.980 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:23.980 nvme0n1 00:24:23.980 02:43:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.980 02:43:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.980 02:43:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.980 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.980 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:23.980 02:43:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.980 02:43:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.980 02:43:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.980 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.980 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:23.980 02:43:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.980 02:43:57 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:23.980 02:43:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.980 02:43:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:23.980 02:43:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:23.980 02:43:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:23.980 02:43:57 -- host/auth.sh@44 -- # digest=sha256 00:24:23.980 02:43:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.980 02:43:57 -- host/auth.sh@44 -- # keyid=0 00:24:23.980 02:43:57 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:23.980 02:43:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:23.980 02:43:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:23.980 02:43:57 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:23.980 02:43:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:24:23.980 02:43:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:23.980 02:43:57 -- host/auth.sh@68 -- # digest=sha256 00:24:23.980 02:43:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:23.980 02:43:57 -- host/auth.sh@68 -- # keyid=0 00:24:23.980 02:43:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:23.980 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.980 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:23.980 02:43:57 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.980 02:43:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:23.980 02:43:57 -- nvmf/common.sh@717 -- # local ip 00:24:23.980 02:43:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:23.980 02:43:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:23.980 02:43:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.980 02:43:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.980 02:43:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:23.980 02:43:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.980 02:43:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:23.980 02:43:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:23.980 02:43:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:23.980 02:43:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:23.980 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.980 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:24.241 nvme0n1 00:24:24.241 02:43:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.241 02:43:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.241 02:43:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.242 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.242 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:24.242 02:43:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.242 02:43:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.242 02:43:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.242 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.242 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:24.242 02:43:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.242 02:43:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.242 02:43:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:24.242 02:43:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.242 02:43:57 -- host/auth.sh@44 -- # digest=sha256 00:24:24.242 02:43:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.242 02:43:57 -- host/auth.sh@44 -- # keyid=1 00:24:24.242 02:43:57 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:24.242 02:43:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:24.242 02:43:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:24.242 02:43:57 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:24.242 02:43:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:24:24.242 02:43:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.242 02:43:57 -- host/auth.sh@68 -- # digest=sha256 00:24:24.242 02:43:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:24.242 02:43:57 -- host/auth.sh@68 -- # keyid=1 00:24:24.242 02:43:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:24.242 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.242 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:24.242 02:43:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.242 02:43:57 -- host/auth.sh@70 -- # get_main_ns_ip 
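
[Editorial sketch] The configure_kernel_target/nvmet_auth_init trace above drives the in-kernel NVMe-oF target through configfs: it creates nqn.2024-02.io.spdk:cnode0 with namespace 1 backed by /dev/nvme0n1, exposes it over TCP at 10.0.0.1:4420 (ipv4), and later whitelists nqn.2024-02.io.spdk:host0. Because xtrace does not record redirection targets, the attribute files written by the echo commands are not visible in the log; the sketch below uses the standard nvmet configfs attribute names as an assumption, while the values are the ones shown in the trace.

# Minimal reconstruction of the kernel target setup implied by the trace.
# Attribute names (device_path, enable, addr_*) are assumed from the usual
# nvmet configfs layout; values (device, IP, port, trtype) come from the log.
modprobe nvmet
cd /sys/kernel/config/nvmet
mkdir -p subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 ports/1
echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
echo 10.0.0.1     > ports/1/addr_traddr
echo tcp          > ports/1/addr_trtype
echo 4420         > ports/1/addr_trsvcid
echo ipv4         > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 \
      /sys/kernel/config/nvmet/ports/1/subsystems/
mkdir hosts/nqn.2024-02.io.spdk:host0
ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 \
      /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
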
00:24:24.242 02:43:57 -- nvmf/common.sh@717 -- # local ip 00:24:24.242 02:43:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.242 02:43:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.242 02:43:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.242 02:43:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.242 02:43:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.242 02:43:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.242 02:43:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.242 02:43:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.242 02:43:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.242 02:43:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:24.242 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.242 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:24.503 nvme0n1 00:24:24.503 02:43:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.503 02:43:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.503 02:43:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.503 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.503 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:24.503 02:43:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.503 02:43:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.503 02:43:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.503 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.503 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:24.503 02:43:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.503 02:43:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.503 02:43:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:24.503 02:43:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.503 02:43:57 -- host/auth.sh@44 -- # digest=sha256 00:24:24.503 02:43:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.503 02:43:57 -- host/auth.sh@44 -- # keyid=2 00:24:24.503 02:43:57 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:24.503 02:43:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:24.503 02:43:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:24.503 02:43:57 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:24.503 02:43:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:24:24.503 02:43:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.503 02:43:57 -- host/auth.sh@68 -- # digest=sha256 00:24:24.503 02:43:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:24.503 02:43:57 -- host/auth.sh@68 -- # keyid=2 00:24:24.503 02:43:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:24.503 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.503 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:24.503 02:43:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.504 02:43:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.504 02:43:57 -- nvmf/common.sh@717 -- # local ip 00:24:24.504 02:43:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.504 02:43:57 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:24:24.504 02:43:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.504 02:43:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.504 02:43:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.504 02:43:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.504 02:43:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.504 02:43:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.504 02:43:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.504 02:43:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:24.504 02:43:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.504 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:24.504 nvme0n1 00:24:24.504 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.504 02:43:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.765 02:43:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.765 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.765 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:24.765 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.765 02:43:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.765 02:43:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.765 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.765 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:24.765 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.765 02:43:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.765 02:43:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:24.765 02:43:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.765 02:43:58 -- host/auth.sh@44 -- # digest=sha256 00:24:24.765 02:43:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.765 02:43:58 -- host/auth.sh@44 -- # keyid=3 00:24:24.765 02:43:58 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:24.765 02:43:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:24.765 02:43:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:24.765 02:43:58 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:24.765 02:43:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:24:24.765 02:43:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.765 02:43:58 -- host/auth.sh@68 -- # digest=sha256 00:24:24.765 02:43:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:24.765 02:43:58 -- host/auth.sh@68 -- # keyid=3 00:24:24.765 02:43:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:24.765 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.765 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:24.765 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.765 02:43:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.765 02:43:58 -- nvmf/common.sh@717 -- # local ip 00:24:24.765 02:43:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.765 02:43:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.765 02:43:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
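
[Editorial sketch] On the host side, each iteration of the trace reconfigures the SPDK bdev_nvme module and re-attaches to the kernel target with DHCHAP enabled. The rpc_cmd wrapper is assumed here to map directly onto scripts/rpc.py; every RPC method and flag below appears verbatim in the log, and the key file paths are the temporary names created earlier in the run.

# One connect/verify/detach cycle as performed by host/auth.sh (condensed sketch).
rpc=scripts/rpc.py

# Load a DHCHAP secret into the SPDK keyring (keys 0-4 are added once up front).
$rpc keyring_file_add_key key1 /tmp/spdk.key-null.3y7

# Restrict the initiator to a single digest/dhgroup pair for this iteration.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach to the kernel target, authenticating with the selected key.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1

# Verify the controller came up, then tear it down before the next combination.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print "nvme0"
$rpc bdev_nvme_detach_controller nvme0
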
00:24:24.765 02:43:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.765 02:43:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.765 02:43:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.765 02:43:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.765 02:43:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.765 02:43:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.765 02:43:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:24.765 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.765 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:24.765 nvme0n1 00:24:24.765 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.765 02:43:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.765 02:43:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.765 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.765 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:24.765 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.765 02:43:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.765 02:43:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.765 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.765 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:24.765 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.765 02:43:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.765 02:43:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:24.765 02:43:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.765 02:43:58 -- host/auth.sh@44 -- # digest=sha256 00:24:24.765 02:43:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.765 02:43:58 -- host/auth.sh@44 -- # keyid=4 00:24:24.766 02:43:58 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:24.766 02:43:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:24.766 02:43:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:24.766 02:43:58 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:24.766 02:43:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:24:24.766 02:43:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.766 02:43:58 -- host/auth.sh@68 -- # digest=sha256 00:24:24.766 02:43:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:24.766 02:43:58 -- host/auth.sh@68 -- # keyid=4 00:24:24.766 02:43:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:24.766 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.766 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.027 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.027 02:43:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.027 02:43:58 -- nvmf/common.sh@717 -- # local ip 00:24:25.027 02:43:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.027 02:43:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.027 02:43:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.027 02:43:58 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.027 02:43:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.027 02:43:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.027 02:43:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.027 02:43:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.027 02:43:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.027 02:43:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:25.027 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.027 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.027 nvme0n1 00:24:25.027 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.027 02:43:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.027 02:43:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.027 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.027 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.027 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.027 02:43:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.027 02:43:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.027 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.027 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.027 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.027 02:43:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.027 02:43:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.027 02:43:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:25.027 02:43:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.027 02:43:58 -- host/auth.sh@44 -- # digest=sha256 00:24:25.027 02:43:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.027 02:43:58 -- host/auth.sh@44 -- # keyid=0 00:24:25.027 02:43:58 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:25.027 02:43:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:25.027 02:43:58 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:25.027 02:43:58 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:25.027 02:43:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:24:25.027 02:43:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.027 02:43:58 -- host/auth.sh@68 -- # digest=sha256 00:24:25.027 02:43:58 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:25.027 02:43:58 -- host/auth.sh@68 -- # keyid=0 00:24:25.027 02:43:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:25.027 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.027 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.027 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.027 02:43:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.027 02:43:58 -- nvmf/common.sh@717 -- # local ip 00:24:25.027 02:43:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.027 02:43:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.027 02:43:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.027 02:43:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.027 02:43:58 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:24:25.027 02:43:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.027 02:43:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.027 02:43:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.027 02:43:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.027 02:43:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:25.027 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.027 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.289 nvme0n1 00:24:25.289 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.289 02:43:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.289 02:43:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.289 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.289 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.289 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.289 02:43:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.289 02:43:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.289 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.289 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.289 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.289 02:43:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.289 02:43:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:25.289 02:43:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.289 02:43:58 -- host/auth.sh@44 -- # digest=sha256 00:24:25.289 02:43:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.289 02:43:58 -- host/auth.sh@44 -- # keyid=1 00:24:25.289 02:43:58 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:25.289 02:43:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:25.289 02:43:58 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:25.289 02:43:58 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:25.289 02:43:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:24:25.289 02:43:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.289 02:43:58 -- host/auth.sh@68 -- # digest=sha256 00:24:25.289 02:43:58 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:25.289 02:43:58 -- host/auth.sh@68 -- # keyid=1 00:24:25.289 02:43:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:25.289 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.289 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.289 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.289 02:43:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.289 02:43:58 -- nvmf/common.sh@717 -- # local ip 00:24:25.289 02:43:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.289 02:43:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.289 02:43:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.289 02:43:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.289 02:43:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.289 02:43:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.289 02:43:58 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.289 02:43:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.289 02:43:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.289 02:43:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:25.289 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.289 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.551 nvme0n1 00:24:25.551 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.551 02:43:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.551 02:43:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.551 02:43:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.551 02:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.551 02:43:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.551 02:43:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.551 02:43:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.551 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.551 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:25.551 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.551 02:43:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.551 02:43:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:25.551 02:43:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.551 02:43:59 -- host/auth.sh@44 -- # digest=sha256 00:24:25.551 02:43:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.551 02:43:59 -- host/auth.sh@44 -- # keyid=2 00:24:25.551 02:43:59 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:25.551 02:43:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:25.551 02:43:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:25.551 02:43:59 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:25.551 02:43:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:24:25.551 02:43:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.551 02:43:59 -- host/auth.sh@68 -- # digest=sha256 00:24:25.551 02:43:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:25.551 02:43:59 -- host/auth.sh@68 -- # keyid=2 00:24:25.552 02:43:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:25.552 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.552 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:25.552 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.552 02:43:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.552 02:43:59 -- nvmf/common.sh@717 -- # local ip 00:24:25.552 02:43:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.552 02:43:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.552 02:43:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.552 02:43:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.552 02:43:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.552 02:43:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.552 02:43:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.552 02:43:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.552 02:43:59 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:24:25.552 02:43:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:25.552 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.552 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:25.813 nvme0n1 00:24:25.813 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.813 02:43:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.813 02:43:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.813 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.813 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:25.813 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.813 02:43:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.813 02:43:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.813 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.813 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:25.813 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.813 02:43:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.813 02:43:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:25.813 02:43:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.813 02:43:59 -- host/auth.sh@44 -- # digest=sha256 00:24:25.813 02:43:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.813 02:43:59 -- host/auth.sh@44 -- # keyid=3 00:24:25.813 02:43:59 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:25.813 02:43:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:25.813 02:43:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:25.813 02:43:59 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:25.813 02:43:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:24:25.813 02:43:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.813 02:43:59 -- host/auth.sh@68 -- # digest=sha256 00:24:25.813 02:43:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:25.813 02:43:59 -- host/auth.sh@68 -- # keyid=3 00:24:25.813 02:43:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:25.813 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.813 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:25.813 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.813 02:43:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.813 02:43:59 -- nvmf/common.sh@717 -- # local ip 00:24:25.813 02:43:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.813 02:43:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.813 02:43:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.813 02:43:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.813 02:43:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.813 02:43:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.813 02:43:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.813 02:43:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.813 02:43:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.813 02:43:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:25.813 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.813 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:26.074 nvme0n1 00:24:26.074 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.074 02:43:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.074 02:43:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.074 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.074 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:26.074 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.074 02:43:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.074 02:43:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.074 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.074 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:26.074 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.074 02:43:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.074 02:43:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:26.074 02:43:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.074 02:43:59 -- host/auth.sh@44 -- # digest=sha256 00:24:26.074 02:43:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:26.074 02:43:59 -- host/auth.sh@44 -- # keyid=4 00:24:26.074 02:43:59 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:26.074 02:43:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:26.074 02:43:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:26.074 02:43:59 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:26.074 02:43:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:24:26.074 02:43:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.074 02:43:59 -- host/auth.sh@68 -- # digest=sha256 00:24:26.074 02:43:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:26.074 02:43:59 -- host/auth.sh@68 -- # keyid=4 00:24:26.075 02:43:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:26.075 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.075 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:26.075 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.075 02:43:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.075 02:43:59 -- nvmf/common.sh@717 -- # local ip 00:24:26.075 02:43:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.075 02:43:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.075 02:43:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.075 02:43:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.075 02:43:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.075 02:43:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.075 02:43:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.075 02:43:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.075 02:43:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.075 02:43:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:24:26.075 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.075 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:26.075 nvme0n1 00:24:26.075 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.075 02:43:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.075 02:43:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.075 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.075 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:26.336 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.336 02:43:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.336 02:43:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.336 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.336 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:26.336 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.336 02:43:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.336 02:43:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.336 02:43:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:26.336 02:43:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.336 02:43:59 -- host/auth.sh@44 -- # digest=sha256 00:24:26.336 02:43:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.336 02:43:59 -- host/auth.sh@44 -- # keyid=0 00:24:26.336 02:43:59 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:26.336 02:43:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:26.336 02:43:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:26.336 02:43:59 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:26.336 02:43:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:24:26.336 02:43:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.336 02:43:59 -- host/auth.sh@68 -- # digest=sha256 00:24:26.336 02:43:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:26.336 02:43:59 -- host/auth.sh@68 -- # keyid=0 00:24:26.336 02:43:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:26.336 02:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.336 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:26.336 02:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.336 02:43:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.336 02:43:59 -- nvmf/common.sh@717 -- # local ip 00:24:26.336 02:43:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.336 02:43:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.336 02:43:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.336 02:43:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.336 02:43:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.336 02:43:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.336 02:43:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.336 02:43:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.336 02:43:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.336 02:43:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:26.336 02:43:59 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:24:26.336 02:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:26.596 nvme0n1 00:24:26.596 02:44:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.596 02:44:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.596 02:44:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.597 02:44:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.597 02:44:00 -- common/autotest_common.sh@10 -- # set +x 00:24:26.597 02:44:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.597 02:44:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.597 02:44:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.597 02:44:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.597 02:44:00 -- common/autotest_common.sh@10 -- # set +x 00:24:26.597 02:44:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.597 02:44:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.597 02:44:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:26.597 02:44:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.597 02:44:00 -- host/auth.sh@44 -- # digest=sha256 00:24:26.597 02:44:00 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.597 02:44:00 -- host/auth.sh@44 -- # keyid=1 00:24:26.597 02:44:00 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:26.597 02:44:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:26.597 02:44:00 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:26.597 02:44:00 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:26.597 02:44:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:24:26.597 02:44:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.597 02:44:00 -- host/auth.sh@68 -- # digest=sha256 00:24:26.597 02:44:00 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:26.597 02:44:00 -- host/auth.sh@68 -- # keyid=1 00:24:26.597 02:44:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:26.597 02:44:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.597 02:44:00 -- common/autotest_common.sh@10 -- # set +x 00:24:26.597 02:44:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.597 02:44:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.597 02:44:00 -- nvmf/common.sh@717 -- # local ip 00:24:26.597 02:44:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.597 02:44:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.597 02:44:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.597 02:44:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.597 02:44:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.597 02:44:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.597 02:44:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.597 02:44:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.597 02:44:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.597 02:44:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:26.597 02:44:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.597 02:44:00 -- common/autotest_common.sh@10 -- # set +x 00:24:26.857 nvme0n1 00:24:26.857 
02:44:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.857 02:44:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.857 02:44:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.857 02:44:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.857 02:44:00 -- common/autotest_common.sh@10 -- # set +x 00:24:26.857 02:44:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.857 02:44:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.857 02:44:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.857 02:44:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.857 02:44:00 -- common/autotest_common.sh@10 -- # set +x 00:24:26.857 02:44:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.857 02:44:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.857 02:44:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:26.857 02:44:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.857 02:44:00 -- host/auth.sh@44 -- # digest=sha256 00:24:26.857 02:44:00 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.857 02:44:00 -- host/auth.sh@44 -- # keyid=2 00:24:26.857 02:44:00 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:26.857 02:44:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:26.857 02:44:00 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:26.857 02:44:00 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:26.857 02:44:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:24:26.857 02:44:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.857 02:44:00 -- host/auth.sh@68 -- # digest=sha256 00:24:26.857 02:44:00 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:26.857 02:44:00 -- host/auth.sh@68 -- # keyid=2 00:24:26.857 02:44:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:26.857 02:44:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.857 02:44:00 -- common/autotest_common.sh@10 -- # set +x 00:24:26.857 02:44:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.857 02:44:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.857 02:44:00 -- nvmf/common.sh@717 -- # local ip 00:24:26.857 02:44:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.857 02:44:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.857 02:44:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.857 02:44:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.857 02:44:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.857 02:44:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.857 02:44:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.857 02:44:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.857 02:44:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.857 02:44:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:26.857 02:44:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.857 02:44:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.118 nvme0n1 00:24:27.118 02:44:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.118 02:44:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.118 02:44:00 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.118 02:44:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.118 02:44:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.118 02:44:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.118 02:44:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.118 02:44:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.118 02:44:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.118 02:44:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.379 02:44:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.379 02:44:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.379 02:44:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:27.379 02:44:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.379 02:44:00 -- host/auth.sh@44 -- # digest=sha256 00:24:27.379 02:44:00 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:27.379 02:44:00 -- host/auth.sh@44 -- # keyid=3 00:24:27.379 02:44:00 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:27.379 02:44:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:27.379 02:44:00 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:27.379 02:44:00 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:27.379 02:44:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:24:27.379 02:44:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.379 02:44:00 -- host/auth.sh@68 -- # digest=sha256 00:24:27.379 02:44:00 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:27.379 02:44:00 -- host/auth.sh@68 -- # keyid=3 00:24:27.379 02:44:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:27.379 02:44:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.379 02:44:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.379 02:44:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.379 02:44:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.379 02:44:00 -- nvmf/common.sh@717 -- # local ip 00:24:27.379 02:44:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.379 02:44:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.379 02:44:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.379 02:44:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.379 02:44:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.379 02:44:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.379 02:44:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.379 02:44:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.379 02:44:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.379 02:44:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:27.379 02:44:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.379 02:44:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.641 nvme0n1 00:24:27.641 02:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.641 02:44:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.641 02:44:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.641 02:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 
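
[Editorial sketch] The same attach/verify/detach pattern repeats for every combination: the trace iterates the sha256/sha384/sha512 digests, the ffdhe2048 through ffdhe8192 DH groups, and key IDs 0-4 (two null-hash keys plus sha256, sha384 and sha512 secrets). A hedged reconstruction of the driving loop, based only on the for-markers and helper names visible in the trace, is shown below; the loop body is condensed.

# Reconstructed outer loop of host/auth.sh (sketch; variable and helper names
# are taken from the xtrace output, the body is simplified).
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Program the kernel target with this digest/dhgroup/key...
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # ...then have the SPDK initiator authenticate against it.
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
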
00:24:27.641 02:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:27.641 02:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.641 02:44:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.641 02:44:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.641 02:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.641 02:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:27.641 02:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.641 02:44:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.641 02:44:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:27.641 02:44:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.641 02:44:01 -- host/auth.sh@44 -- # digest=sha256 00:24:27.641 02:44:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:27.641 02:44:01 -- host/auth.sh@44 -- # keyid=4 00:24:27.641 02:44:01 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:27.641 02:44:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:27.641 02:44:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:27.641 02:44:01 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:27.641 02:44:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:24:27.641 02:44:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.641 02:44:01 -- host/auth.sh@68 -- # digest=sha256 00:24:27.641 02:44:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:27.641 02:44:01 -- host/auth.sh@68 -- # keyid=4 00:24:27.641 02:44:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:27.641 02:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.641 02:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:27.641 02:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.641 02:44:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.641 02:44:01 -- nvmf/common.sh@717 -- # local ip 00:24:27.641 02:44:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.641 02:44:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.641 02:44:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.641 02:44:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.641 02:44:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.641 02:44:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.641 02:44:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.641 02:44:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.641 02:44:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.641 02:44:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:27.641 02:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.641 02:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:27.902 nvme0n1 00:24:27.902 02:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.902 02:44:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.902 02:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.902 02:44:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.902 02:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:27.902 
02:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.902 02:44:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.902 02:44:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.902 02:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.902 02:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:27.902 02:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.902 02:44:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:27.902 02:44:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.902 02:44:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:27.902 02:44:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.902 02:44:01 -- host/auth.sh@44 -- # digest=sha256 00:24:27.902 02:44:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.902 02:44:01 -- host/auth.sh@44 -- # keyid=0 00:24:27.902 02:44:01 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:27.902 02:44:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:27.902 02:44:01 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:27.902 02:44:01 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:27.902 02:44:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:24:27.902 02:44:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.902 02:44:01 -- host/auth.sh@68 -- # digest=sha256 00:24:27.902 02:44:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:27.902 02:44:01 -- host/auth.sh@68 -- # keyid=0 00:24:27.902 02:44:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:27.902 02:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.902 02:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:27.902 02:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.902 02:44:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.902 02:44:01 -- nvmf/common.sh@717 -- # local ip 00:24:27.902 02:44:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.902 02:44:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.902 02:44:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.902 02:44:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.902 02:44:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.902 02:44:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.902 02:44:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.902 02:44:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.902 02:44:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.902 02:44:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:27.902 02:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.902 02:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.475 nvme0n1 00:24:28.475 02:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.475 02:44:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.475 02:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.475 02:44:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:28.475 02:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.475 02:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.475 02:44:01 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.475 02:44:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.475 02:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.475 02:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.475 02:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.475 02:44:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:28.475 02:44:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:28.475 02:44:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.475 02:44:01 -- host/auth.sh@44 -- # digest=sha256 00:24:28.475 02:44:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.475 02:44:01 -- host/auth.sh@44 -- # keyid=1 00:24:28.475 02:44:01 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:28.475 02:44:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:28.475 02:44:01 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:28.475 02:44:01 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:28.475 02:44:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:24:28.475 02:44:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.475 02:44:01 -- host/auth.sh@68 -- # digest=sha256 00:24:28.475 02:44:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:28.475 02:44:01 -- host/auth.sh@68 -- # keyid=1 00:24:28.475 02:44:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:28.475 02:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.475 02:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.475 02:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.475 02:44:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.475 02:44:01 -- nvmf/common.sh@717 -- # local ip 00:24:28.475 02:44:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.475 02:44:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.475 02:44:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.475 02:44:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.475 02:44:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:28.475 02:44:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.475 02:44:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:28.475 02:44:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:28.475 02:44:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:28.475 02:44:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:28.475 02:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.475 02:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:28.736 nvme0n1 00:24:28.736 02:44:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.736 02:44:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.736 02:44:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:28.736 02:44:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.736 02:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:28.736 02:44:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.997 02:44:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.997 02:44:02 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:28.997 02:44:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.997 02:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:28.997 02:44:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.997 02:44:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:28.997 02:44:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:28.997 02:44:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.997 02:44:02 -- host/auth.sh@44 -- # digest=sha256 00:24:28.997 02:44:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.997 02:44:02 -- host/auth.sh@44 -- # keyid=2 00:24:28.997 02:44:02 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:28.997 02:44:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:28.997 02:44:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:28.997 02:44:02 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:28.997 02:44:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:24:28.997 02:44:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.997 02:44:02 -- host/auth.sh@68 -- # digest=sha256 00:24:28.997 02:44:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:28.997 02:44:02 -- host/auth.sh@68 -- # keyid=2 00:24:28.997 02:44:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:28.997 02:44:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.997 02:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:28.997 02:44:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.997 02:44:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.997 02:44:02 -- nvmf/common.sh@717 -- # local ip 00:24:28.997 02:44:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.997 02:44:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.997 02:44:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.997 02:44:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.997 02:44:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:28.997 02:44:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.997 02:44:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:28.997 02:44:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:28.997 02:44:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:28.997 02:44:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:28.997 02:44:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.997 02:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:29.258 nvme0n1 00:24:29.258 02:44:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.258 02:44:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.258 02:44:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.258 02:44:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.258 02:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:29.258 02:44:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.533 02:44:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.533 02:44:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.533 02:44:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.533 02:44:02 -- common/autotest_common.sh@10 -- # 
set +x 00:24:29.533 02:44:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.533 02:44:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.534 02:44:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:29.534 02:44:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.534 02:44:02 -- host/auth.sh@44 -- # digest=sha256 00:24:29.534 02:44:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.534 02:44:02 -- host/auth.sh@44 -- # keyid=3 00:24:29.534 02:44:02 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:29.534 02:44:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:29.534 02:44:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:29.534 02:44:02 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:29.534 02:44:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:24:29.534 02:44:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.534 02:44:02 -- host/auth.sh@68 -- # digest=sha256 00:24:29.534 02:44:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:29.534 02:44:02 -- host/auth.sh@68 -- # keyid=3 00:24:29.534 02:44:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:29.534 02:44:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.534 02:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:29.534 02:44:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.534 02:44:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.534 02:44:02 -- nvmf/common.sh@717 -- # local ip 00:24:29.534 02:44:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.534 02:44:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.534 02:44:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.534 02:44:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.534 02:44:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.534 02:44:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.534 02:44:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.534 02:44:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.534 02:44:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.534 02:44:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:29.534 02:44:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.534 02:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:29.797 nvme0n1 00:24:29.797 02:44:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.797 02:44:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.797 02:44:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.797 02:44:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.797 02:44:03 -- common/autotest_common.sh@10 -- # set +x 00:24:29.797 02:44:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.797 02:44:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.797 02:44:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.797 02:44:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.797 02:44:03 -- common/autotest_common.sh@10 -- # set +x 00:24:29.797 02:44:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.797 02:44:03 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.797 02:44:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:29.797 02:44:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.798 02:44:03 -- host/auth.sh@44 -- # digest=sha256 00:24:29.798 02:44:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.798 02:44:03 -- host/auth.sh@44 -- # keyid=4 00:24:29.798 02:44:03 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:29.798 02:44:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:29.798 02:44:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:29.798 02:44:03 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:29.798 02:44:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:24:29.798 02:44:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.798 02:44:03 -- host/auth.sh@68 -- # digest=sha256 00:24:29.798 02:44:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:29.798 02:44:03 -- host/auth.sh@68 -- # keyid=4 00:24:29.798 02:44:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:29.798 02:44:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.798 02:44:03 -- common/autotest_common.sh@10 -- # set +x 00:24:29.798 02:44:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.798 02:44:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.798 02:44:03 -- nvmf/common.sh@717 -- # local ip 00:24:29.798 02:44:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.798 02:44:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.798 02:44:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.798 02:44:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.798 02:44:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.798 02:44:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.798 02:44:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.798 02:44:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.798 02:44:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:30.059 02:44:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:30.059 02:44:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.059 02:44:03 -- common/autotest_common.sh@10 -- # set +x 00:24:30.319 nvme0n1 00:24:30.319 02:44:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.319 02:44:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.319 02:44:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:30.319 02:44:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.319 02:44:03 -- common/autotest_common.sh@10 -- # set +x 00:24:30.319 02:44:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.319 02:44:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.319 02:44:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.319 02:44:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.319 02:44:03 -- common/autotest_common.sh@10 -- # set +x 00:24:30.319 02:44:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.319 02:44:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:30.319 02:44:03 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:30.319 02:44:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:30.319 02:44:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:30.319 02:44:03 -- host/auth.sh@44 -- # digest=sha256 00:24:30.319 02:44:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.319 02:44:03 -- host/auth.sh@44 -- # keyid=0 00:24:30.319 02:44:03 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:30.319 02:44:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:30.319 02:44:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:30.319 02:44:03 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:30.319 02:44:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:24:30.319 02:44:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:30.319 02:44:03 -- host/auth.sh@68 -- # digest=sha256 00:24:30.319 02:44:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:30.319 02:44:03 -- host/auth.sh@68 -- # keyid=0 00:24:30.319 02:44:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:30.319 02:44:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.319 02:44:03 -- common/autotest_common.sh@10 -- # set +x 00:24:30.319 02:44:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.319 02:44:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:30.319 02:44:03 -- nvmf/common.sh@717 -- # local ip 00:24:30.319 02:44:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:30.319 02:44:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:30.319 02:44:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.319 02:44:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.319 02:44:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:30.319 02:44:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.319 02:44:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:30.319 02:44:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:30.319 02:44:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:30.319 02:44:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:30.319 02:44:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.319 02:44:03 -- common/autotest_common.sh@10 -- # set +x 00:24:31.261 nvme0n1 00:24:31.261 02:44:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.261 02:44:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.261 02:44:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:31.261 02:44:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.261 02:44:04 -- common/autotest_common.sh@10 -- # set +x 00:24:31.261 02:44:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.261 02:44:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.261 02:44:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.261 02:44:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.261 02:44:04 -- common/autotest_common.sh@10 -- # set +x 00:24:31.261 02:44:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.261 02:44:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:31.261 02:44:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:31.261 02:44:04 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:31.261 02:44:04 -- host/auth.sh@44 -- # digest=sha256 00:24:31.261 02:44:04 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.261 02:44:04 -- host/auth.sh@44 -- # keyid=1 00:24:31.261 02:44:04 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:31.261 02:44:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:31.261 02:44:04 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:31.261 02:44:04 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:31.261 02:44:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:24:31.261 02:44:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:31.261 02:44:04 -- host/auth.sh@68 -- # digest=sha256 00:24:31.261 02:44:04 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:31.261 02:44:04 -- host/auth.sh@68 -- # keyid=1 00:24:31.261 02:44:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:31.261 02:44:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.261 02:44:04 -- common/autotest_common.sh@10 -- # set +x 00:24:31.261 02:44:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.261 02:44:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:31.261 02:44:04 -- nvmf/common.sh@717 -- # local ip 00:24:31.261 02:44:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.261 02:44:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:31.261 02:44:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.261 02:44:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.261 02:44:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:31.261 02:44:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.261 02:44:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:31.261 02:44:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:31.261 02:44:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:31.261 02:44:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:31.261 02:44:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.261 02:44:04 -- common/autotest_common.sh@10 -- # set +x 00:24:31.832 nvme0n1 00:24:31.833 02:44:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.094 02:44:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.094 02:44:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.094 02:44:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:32.094 02:44:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.094 02:44:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.094 02:44:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.094 02:44:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.094 02:44:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.094 02:44:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.094 02:44:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.094 02:44:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:32.094 02:44:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:32.094 02:44:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:32.094 02:44:05 -- host/auth.sh@44 -- # digest=sha256 
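The host/auth.sh@45-@49 echoes traced above are the body of nvmet_auth_set_key: for each digest/dhgroup/key combination the suite pushes the HMAC name, the FFDHE group and the DHHC-1 secret to the kernel nvmet target before the initiator reconnects. A minimal sketch of where those three echoes land, assuming the stock Linux nvmet configfs layout; the directory and attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key) are an assumption here, not something this trace shows:

# Hypothetical stand-in for nvmet_auth_set_key, reconstructed from the echoes above.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]}    # keys[] is the suite's array of DHHC-1 secrets
    # Assumed configfs path for the allowed-host entry created earlier in the run.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha256)
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # e.g. ffdhe8192
    echo "${key}"          > "${host}/dhchap_key"      # DHHC-1:0N:...: secret
}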
00:24:32.094 02:44:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:32.094 02:44:05 -- host/auth.sh@44 -- # keyid=2 00:24:32.094 02:44:05 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:32.094 02:44:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:32.094 02:44:05 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:32.094 02:44:05 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:32.094 02:44:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:24:32.094 02:44:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:32.094 02:44:05 -- host/auth.sh@68 -- # digest=sha256 00:24:32.094 02:44:05 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:32.094 02:44:05 -- host/auth.sh@68 -- # keyid=2 00:24:32.094 02:44:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:32.094 02:44:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.094 02:44:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.094 02:44:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.094 02:44:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:32.094 02:44:05 -- nvmf/common.sh@717 -- # local ip 00:24:32.094 02:44:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:32.094 02:44:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:32.094 02:44:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.094 02:44:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.094 02:44:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:32.094 02:44:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.094 02:44:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:32.094 02:44:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:32.094 02:44:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:32.094 02:44:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:32.094 02:44:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.094 02:44:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.666 nvme0n1 00:24:32.666 02:44:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.666 02:44:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.666 02:44:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:32.666 02:44:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.666 02:44:06 -- common/autotest_common.sh@10 -- # set +x 00:24:32.666 02:44:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.666 02:44:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.666 02:44:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.666 02:44:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.666 02:44:06 -- common/autotest_common.sh@10 -- # set +x 00:24:32.928 02:44:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.928 02:44:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:32.928 02:44:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:32.928 02:44:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:32.928 02:44:06 -- host/auth.sh@44 -- # digest=sha256 00:24:32.928 02:44:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:32.928 02:44:06 -- host/auth.sh@44 -- # keyid=3 00:24:32.928 02:44:06 -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:32.928 02:44:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:32.928 02:44:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:32.928 02:44:06 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:32.928 02:44:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:24:32.928 02:44:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:32.928 02:44:06 -- host/auth.sh@68 -- # digest=sha256 00:24:32.928 02:44:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:32.928 02:44:06 -- host/auth.sh@68 -- # keyid=3 00:24:32.928 02:44:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:32.928 02:44:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.928 02:44:06 -- common/autotest_common.sh@10 -- # set +x 00:24:32.928 02:44:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.928 02:44:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:32.928 02:44:06 -- nvmf/common.sh@717 -- # local ip 00:24:32.928 02:44:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:32.928 02:44:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:32.928 02:44:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.928 02:44:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.928 02:44:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:32.928 02:44:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.928 02:44:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:32.928 02:44:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:32.928 02:44:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:32.928 02:44:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:32.928 02:44:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.928 02:44:06 -- common/autotest_common.sh@10 -- # set +x 00:24:33.500 nvme0n1 00:24:33.500 02:44:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.500 02:44:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.500 02:44:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.500 02:44:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:33.500 02:44:06 -- common/autotest_common.sh@10 -- # set +x 00:24:33.500 02:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.500 02:44:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.500 02:44:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.500 02:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.500 02:44:07 -- common/autotest_common.sh@10 -- # set +x 00:24:33.500 02:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.500 02:44:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:33.500 02:44:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:33.500 02:44:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:33.500 02:44:07 -- host/auth.sh@44 -- # digest=sha256 00:24:33.500 02:44:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:33.500 02:44:07 -- host/auth.sh@44 -- # keyid=4 00:24:33.500 02:44:07 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:33.500 
02:44:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:33.500 02:44:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:33.500 02:44:07 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:33.500 02:44:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:24:33.500 02:44:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:33.500 02:44:07 -- host/auth.sh@68 -- # digest=sha256 00:24:33.500 02:44:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:33.501 02:44:07 -- host/auth.sh@68 -- # keyid=4 00:24:33.501 02:44:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:33.501 02:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.501 02:44:07 -- common/autotest_common.sh@10 -- # set +x 00:24:33.501 02:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.501 02:44:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:33.501 02:44:07 -- nvmf/common.sh@717 -- # local ip 00:24:33.501 02:44:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:33.501 02:44:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:33.501 02:44:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.501 02:44:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.501 02:44:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:33.501 02:44:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.501 02:44:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:33.501 02:44:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:33.501 02:44:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:33.501 02:44:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.501 02:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.501 02:44:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.444 nvme0n1 00:24:34.444 02:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.444 02:44:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.444 02:44:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:34.444 02:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.444 02:44:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.444 02:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.444 02:44:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.444 02:44:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.444 02:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.444 02:44:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.444 02:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.444 02:44:07 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:34.444 02:44:07 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.444 02:44:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:34.444 02:44:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:34.444 02:44:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:34.444 02:44:07 -- host/auth.sh@44 -- # digest=sha384 00:24:34.444 02:44:07 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.444 02:44:07 -- host/auth.sh@44 -- # keyid=0 00:24:34.444 02:44:07 -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:34.444 02:44:07 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:34.444 02:44:07 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:34.444 02:44:07 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:34.444 02:44:07 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:24:34.444 02:44:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:34.444 02:44:07 -- host/auth.sh@68 -- # digest=sha384 00:24:34.444 02:44:07 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:34.444 02:44:07 -- host/auth.sh@68 -- # keyid=0 00:24:34.444 02:44:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:34.444 02:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.444 02:44:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.444 02:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.444 02:44:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:34.444 02:44:07 -- nvmf/common.sh@717 -- # local ip 00:24:34.444 02:44:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:34.444 02:44:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:34.444 02:44:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.444 02:44:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.444 02:44:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:34.444 02:44:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.444 02:44:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:34.444 02:44:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:34.444 02:44:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:34.444 02:44:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:34.444 02:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.444 02:44:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.444 nvme0n1 00:24:34.444 02:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.444 02:44:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.444 02:44:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:34.444 02:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.444 02:44:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.444 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.444 02:44:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.444 02:44:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.444 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.444 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:34.444 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.444 02:44:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:34.444 02:44:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:34.444 02:44:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:34.444 02:44:08 -- host/auth.sh@44 -- # digest=sha384 00:24:34.444 02:44:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.444 02:44:08 -- host/auth.sh@44 -- # keyid=1 00:24:34.444 02:44:08 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:34.444 02:44:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:34.444 
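On the initiator side, connect_authenticate (host/auth.sh@66-@74) is the sequence that keeps repeating in this trace: restrict the bdev_nvme layer to the digest and dhgroup under test, attach with the matching DH-HMAC-CHAP key, confirm the controller came up as nvme0, then detach. Run outside the suite, one pass would look roughly like the following, using scripts/rpc.py in place of the rpc_cmd wrapper and assuming keys key0-key4 were registered the same way the suite's setup did:

# One pass of connect_authenticate (sha384 / ffdhe2048 / key 1), as a standalone sketch.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect exactly: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0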
02:44:08 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:34.444 02:44:08 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:34.444 02:44:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:24:34.444 02:44:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:34.444 02:44:08 -- host/auth.sh@68 -- # digest=sha384 00:24:34.444 02:44:08 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:34.444 02:44:08 -- host/auth.sh@68 -- # keyid=1 00:24:34.444 02:44:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:34.444 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.444 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:34.705 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.705 02:44:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:34.705 02:44:08 -- nvmf/common.sh@717 -- # local ip 00:24:34.705 02:44:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:34.705 02:44:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:34.705 02:44:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.705 02:44:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.705 02:44:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:34.705 02:44:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.705 02:44:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:34.705 02:44:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:34.705 02:44:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:34.705 02:44:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:34.705 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.705 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:34.705 nvme0n1 00:24:34.705 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.705 02:44:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.705 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.705 02:44:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:34.705 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:34.705 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.705 02:44:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.705 02:44:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.705 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.705 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:34.705 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.705 02:44:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:34.705 02:44:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:34.705 02:44:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:34.705 02:44:08 -- host/auth.sh@44 -- # digest=sha384 00:24:34.705 02:44:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.705 02:44:08 -- host/auth.sh@44 -- # keyid=2 00:24:34.705 02:44:08 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:34.705 02:44:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:34.705 02:44:08 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:34.705 02:44:08 -- host/auth.sh@49 -- # echo 
DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:34.705 02:44:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:24:34.705 02:44:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:34.705 02:44:08 -- host/auth.sh@68 -- # digest=sha384 00:24:34.705 02:44:08 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:34.705 02:44:08 -- host/auth.sh@68 -- # keyid=2 00:24:34.705 02:44:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:34.705 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.705 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:34.705 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.705 02:44:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:34.705 02:44:08 -- nvmf/common.sh@717 -- # local ip 00:24:34.705 02:44:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:34.705 02:44:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:34.705 02:44:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.705 02:44:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.705 02:44:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:34.705 02:44:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.705 02:44:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:34.705 02:44:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:34.705 02:44:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:34.705 02:44:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:34.705 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.705 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:34.967 nvme0n1 00:24:34.967 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.967 02:44:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.967 02:44:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:34.967 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.967 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:34.967 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.967 02:44:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.967 02:44:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.967 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.967 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:34.967 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.967 02:44:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:34.967 02:44:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:34.967 02:44:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:34.967 02:44:08 -- host/auth.sh@44 -- # digest=sha384 00:24:34.967 02:44:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.967 02:44:08 -- host/auth.sh@44 -- # keyid=3 00:24:34.967 02:44:08 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:34.967 02:44:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:34.967 02:44:08 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:34.967 02:44:08 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:34.967 02:44:08 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:24:34.967 02:44:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:34.967 02:44:08 -- host/auth.sh@68 -- # digest=sha384 00:24:34.967 02:44:08 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:34.967 02:44:08 -- host/auth.sh@68 -- # keyid=3 00:24:34.967 02:44:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:34.967 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.967 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:34.967 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.967 02:44:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:34.967 02:44:08 -- nvmf/common.sh@717 -- # local ip 00:24:34.967 02:44:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:34.967 02:44:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:34.967 02:44:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.967 02:44:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.967 02:44:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:34.967 02:44:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.967 02:44:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:34.967 02:44:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:34.967 02:44:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:34.967 02:44:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:34.967 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.967 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.229 nvme0n1 00:24:35.229 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.229 02:44:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.229 02:44:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:35.229 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.229 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.229 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.229 02:44:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.229 02:44:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.229 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.229 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.229 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.229 02:44:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:35.229 02:44:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:35.229 02:44:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.229 02:44:08 -- host/auth.sh@44 -- # digest=sha384 00:24:35.229 02:44:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:35.229 02:44:08 -- host/auth.sh@44 -- # keyid=4 00:24:35.229 02:44:08 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:35.229 02:44:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:35.229 02:44:08 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:35.229 02:44:08 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:35.229 02:44:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:24:35.229 02:44:08 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:24:35.229 02:44:08 -- host/auth.sh@68 -- # digest=sha384 00:24:35.229 02:44:08 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:35.229 02:44:08 -- host/auth.sh@68 -- # keyid=4 00:24:35.229 02:44:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:35.229 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.229 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.229 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.229 02:44:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.229 02:44:08 -- nvmf/common.sh@717 -- # local ip 00:24:35.229 02:44:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.229 02:44:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.229 02:44:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.230 02:44:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.230 02:44:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:35.230 02:44:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.230 02:44:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:35.230 02:44:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:35.230 02:44:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:35.230 02:44:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.230 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.230 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.230 nvme0n1 00:24:35.230 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.230 02:44:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.230 02:44:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:35.230 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.230 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.230 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.490 02:44:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.490 02:44:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.490 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.490 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.490 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.490 02:44:08 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:35.490 02:44:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:35.490 02:44:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:35.490 02:44:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.490 02:44:08 -- host/auth.sh@44 -- # digest=sha384 00:24:35.490 02:44:08 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:35.490 02:44:08 -- host/auth.sh@44 -- # keyid=0 00:24:35.490 02:44:08 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:35.490 02:44:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:35.490 02:44:08 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:35.490 02:44:08 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:35.490 02:44:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:24:35.490 02:44:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.490 02:44:08 -- host/auth.sh@68 -- # 
digest=sha384 00:24:35.490 02:44:08 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:35.490 02:44:08 -- host/auth.sh@68 -- # keyid=0 00:24:35.490 02:44:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:35.490 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.490 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.490 02:44:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.490 02:44:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.490 02:44:08 -- nvmf/common.sh@717 -- # local ip 00:24:35.490 02:44:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.490 02:44:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.490 02:44:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.490 02:44:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.490 02:44:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:35.490 02:44:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.490 02:44:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:35.490 02:44:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:35.490 02:44:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:35.490 02:44:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:35.490 02:44:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.490 02:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.490 nvme0n1 00:24:35.490 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.490 02:44:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.490 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.490 02:44:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:35.490 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:35.490 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.490 02:44:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.490 02:44:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.490 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.490 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:35.751 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.751 02:44:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:35.751 02:44:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:35.751 02:44:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.751 02:44:09 -- host/auth.sh@44 -- # digest=sha384 00:24:35.751 02:44:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:35.751 02:44:09 -- host/auth.sh@44 -- # keyid=1 00:24:35.751 02:44:09 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:35.751 02:44:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:35.751 02:44:09 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:35.751 02:44:09 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:35.751 02:44:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:24:35.751 02:44:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.751 02:44:09 -- host/auth.sh@68 -- # digest=sha384 00:24:35.751 02:44:09 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:35.751 02:44:09 -- host/auth.sh@68 
-- # keyid=1 00:24:35.751 02:44:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:35.751 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.751 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:35.751 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.751 02:44:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.751 02:44:09 -- nvmf/common.sh@717 -- # local ip 00:24:35.751 02:44:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.751 02:44:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.751 02:44:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.751 02:44:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.751 02:44:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:35.751 02:44:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.751 02:44:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:35.751 02:44:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:35.751 02:44:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:35.751 02:44:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:35.751 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.751 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:35.751 nvme0n1 00:24:35.751 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.751 02:44:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.751 02:44:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:35.751 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.751 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:35.751 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.751 02:44:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.751 02:44:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.751 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.751 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:35.751 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.751 02:44:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:35.751 02:44:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:35.751 02:44:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.751 02:44:09 -- host/auth.sh@44 -- # digest=sha384 00:24:35.751 02:44:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:35.751 02:44:09 -- host/auth.sh@44 -- # keyid=2 00:24:35.751 02:44:09 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:35.751 02:44:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:35.751 02:44:09 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:35.751 02:44:09 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:35.751 02:44:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:24:35.751 02:44:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.751 02:44:09 -- host/auth.sh@68 -- # digest=sha384 00:24:35.751 02:44:09 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:35.751 02:44:09 -- host/auth.sh@68 -- # keyid=2 00:24:35.751 02:44:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:35.751 02:44:09 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.751 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.012 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.012 02:44:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.012 02:44:09 -- nvmf/common.sh@717 -- # local ip 00:24:36.012 02:44:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.012 02:44:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.012 02:44:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.012 02:44:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.012 02:44:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.012 02:44:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.012 02:44:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.012 02:44:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.012 02:44:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.012 02:44:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:36.012 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.012 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.012 nvme0n1 00:24:36.012 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.012 02:44:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.012 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.012 02:44:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.012 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.012 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.012 02:44:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.012 02:44:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.012 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.012 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.012 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.012 02:44:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.012 02:44:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:36.012 02:44:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.012 02:44:09 -- host/auth.sh@44 -- # digest=sha384 00:24:36.012 02:44:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:36.012 02:44:09 -- host/auth.sh@44 -- # keyid=3 00:24:36.012 02:44:09 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:36.012 02:44:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:36.012 02:44:09 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:36.012 02:44:09 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:36.012 02:44:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:24:36.012 02:44:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.012 02:44:09 -- host/auth.sh@68 -- # digest=sha384 00:24:36.012 02:44:09 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:36.012 02:44:09 -- host/auth.sh@68 -- # keyid=3 00:24:36.012 02:44:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:36.012 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.012 02:44:09 -- common/autotest_common.sh@10 -- # set +x 
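get_main_ns_ip (nvmf/common.sh@717-@731), which runs before every attach in this trace, only resolves which address the initiator should dial: it maps the transport to an environment-variable name and dereferences it, which is why the trace always ends in "echo 10.0.0.1" for tcp. Reconstructed from the traced lines; the transport variable is written as TEST_TRANSPORT below, which is an assumption, since the trace only shows its value, tcp:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP for tcp
    [[ -z ${!ip} ]] && return 1            # indirect expansion -> 10.0.0.1 in this run
    echo "${!ip}"
}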
00:24:36.012 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.012 02:44:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.012 02:44:09 -- nvmf/common.sh@717 -- # local ip 00:24:36.012 02:44:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.012 02:44:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.012 02:44:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.012 02:44:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.012 02:44:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.012 02:44:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.012 02:44:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.012 02:44:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.012 02:44:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.012 02:44:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:36.012 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.012 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.274 nvme0n1 00:24:36.274 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.274 02:44:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.274 02:44:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.274 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.274 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.274 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.274 02:44:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.274 02:44:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.274 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.274 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.274 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.274 02:44:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.274 02:44:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:36.274 02:44:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.274 02:44:09 -- host/auth.sh@44 -- # digest=sha384 00:24:36.274 02:44:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:36.274 02:44:09 -- host/auth.sh@44 -- # keyid=4 00:24:36.274 02:44:09 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:36.274 02:44:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:36.274 02:44:09 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:36.274 02:44:09 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:36.274 02:44:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:24:36.274 02:44:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.274 02:44:09 -- host/auth.sh@68 -- # digest=sha384 00:24:36.274 02:44:09 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:36.274 02:44:09 -- host/auth.sh@68 -- # keyid=4 00:24:36.274 02:44:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:36.274 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.274 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.274 02:44:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:24:36.274 02:44:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.274 02:44:09 -- nvmf/common.sh@717 -- # local ip 00:24:36.274 02:44:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.274 02:44:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.274 02:44:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.274 02:44:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.274 02:44:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.274 02:44:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.274 02:44:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.274 02:44:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.274 02:44:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.274 02:44:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:36.274 02:44:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.274 02:44:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.535 nvme0n1 00:24:36.535 02:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.535 02:44:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.535 02:44:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.535 02:44:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.535 02:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:36.535 02:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.535 02:44:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.535 02:44:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.535 02:44:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.535 02:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:36.535 02:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.535 02:44:10 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:36.535 02:44:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.535 02:44:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:36.535 02:44:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.535 02:44:10 -- host/auth.sh@44 -- # digest=sha384 00:24:36.535 02:44:10 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.535 02:44:10 -- host/auth.sh@44 -- # keyid=0 00:24:36.535 02:44:10 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:36.535 02:44:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:36.535 02:44:10 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:36.535 02:44:10 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:36.535 02:44:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:24:36.535 02:44:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.535 02:44:10 -- host/auth.sh@68 -- # digest=sha384 00:24:36.535 02:44:10 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:36.535 02:44:10 -- host/auth.sh@68 -- # keyid=0 00:24:36.535 02:44:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:36.535 02:44:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.535 02:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:36.535 02:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.535 02:44:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.535 02:44:10 -- 
nvmf/common.sh@717 -- # local ip 00:24:36.535 02:44:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.535 02:44:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.535 02:44:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.535 02:44:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.535 02:44:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.535 02:44:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.535 02:44:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.535 02:44:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.535 02:44:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.535 02:44:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:36.535 02:44:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.535 02:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:36.797 nvme0n1 00:24:36.797 02:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.797 02:44:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.797 02:44:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.797 02:44:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.797 02:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:36.797 02:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.797 02:44:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.797 02:44:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.797 02:44:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.797 02:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:36.797 02:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.797 02:44:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.797 02:44:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:36.797 02:44:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.797 02:44:10 -- host/auth.sh@44 -- # digest=sha384 00:24:36.797 02:44:10 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.797 02:44:10 -- host/auth.sh@44 -- # keyid=1 00:24:36.797 02:44:10 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:36.797 02:44:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:36.797 02:44:10 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:36.797 02:44:10 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:36.797 02:44:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:24:36.797 02:44:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.797 02:44:10 -- host/auth.sh@68 -- # digest=sha384 00:24:36.797 02:44:10 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:36.797 02:44:10 -- host/auth.sh@68 -- # keyid=1 00:24:36.797 02:44:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:36.797 02:44:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.797 02:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:36.797 02:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.797 02:44:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.797 02:44:10 -- nvmf/common.sh@717 -- # local ip 00:24:36.797 02:44:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.797 02:44:10 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.064 02:44:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.064 02:44:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.064 02:44:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.064 02:44:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.064 02:44:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.064 02:44:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.064 02:44:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.064 02:44:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:37.064 02:44:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.064 02:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.064 nvme0n1 00:24:37.064 02:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.064 02:44:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.064 02:44:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.064 02:44:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.064 02:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.398 02:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.398 02:44:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.398 02:44:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.398 02:44:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.398 02:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.398 02:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.398 02:44:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.398 02:44:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:37.398 02:44:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.398 02:44:10 -- host/auth.sh@44 -- # digest=sha384 00:24:37.398 02:44:10 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:37.398 02:44:10 -- host/auth.sh@44 -- # keyid=2 00:24:37.398 02:44:10 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:37.398 02:44:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:37.398 02:44:10 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:37.398 02:44:10 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:37.398 02:44:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:24:37.398 02:44:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.398 02:44:10 -- host/auth.sh@68 -- # digest=sha384 00:24:37.398 02:44:10 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:37.398 02:44:10 -- host/auth.sh@68 -- # keyid=2 00:24:37.398 02:44:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:37.398 02:44:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.398 02:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.398 02:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.398 02:44:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.398 02:44:10 -- nvmf/common.sh@717 -- # local ip 00:24:37.398 02:44:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.398 02:44:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.398 02:44:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.398 02:44:10 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.398 02:44:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.398 02:44:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.398 02:44:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.398 02:44:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.398 02:44:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.398 02:44:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:37.398 02:44:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.398 02:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.688 nvme0n1 00:24:37.688 02:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.688 02:44:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.689 02:44:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.689 02:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.689 02:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:37.689 02:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.689 02:44:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.689 02:44:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.689 02:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.689 02:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:37.689 02:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.689 02:44:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.689 02:44:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:37.689 02:44:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.689 02:44:11 -- host/auth.sh@44 -- # digest=sha384 00:24:37.689 02:44:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:37.689 02:44:11 -- host/auth.sh@44 -- # keyid=3 00:24:37.689 02:44:11 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:37.689 02:44:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:37.689 02:44:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:37.689 02:44:11 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:37.689 02:44:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:24:37.689 02:44:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.689 02:44:11 -- host/auth.sh@68 -- # digest=sha384 00:24:37.689 02:44:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:37.689 02:44:11 -- host/auth.sh@68 -- # keyid=3 00:24:37.689 02:44:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:37.689 02:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.689 02:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:37.689 02:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.689 02:44:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.689 02:44:11 -- nvmf/common.sh@717 -- # local ip 00:24:37.689 02:44:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.689 02:44:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.689 02:44:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.689 02:44:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.689 02:44:11 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:24:37.689 02:44:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.689 02:44:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.689 02:44:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.689 02:44:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.689 02:44:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:37.689 02:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.689 02:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:37.949 nvme0n1 00:24:37.949 02:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.949 02:44:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.949 02:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.949 02:44:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.949 02:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:37.949 02:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.949 02:44:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.949 02:44:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.949 02:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.949 02:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:37.949 02:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.949 02:44:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.949 02:44:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:37.949 02:44:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.949 02:44:11 -- host/auth.sh@44 -- # digest=sha384 00:24:37.949 02:44:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:37.949 02:44:11 -- host/auth.sh@44 -- # keyid=4 00:24:37.949 02:44:11 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:37.949 02:44:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:37.949 02:44:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:37.949 02:44:11 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:37.949 02:44:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:24:37.949 02:44:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.949 02:44:11 -- host/auth.sh@68 -- # digest=sha384 00:24:37.949 02:44:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:37.949 02:44:11 -- host/auth.sh@68 -- # keyid=4 00:24:37.949 02:44:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:37.949 02:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.949 02:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:37.949 02:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.950 02:44:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.950 02:44:11 -- nvmf/common.sh@717 -- # local ip 00:24:37.950 02:44:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.950 02:44:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.950 02:44:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.950 02:44:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.950 02:44:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.950 02:44:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:24:37.950 02:44:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.950 02:44:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.950 02:44:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.950 02:44:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.950 02:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.950 02:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:38.210 nvme0n1 00:24:38.210 02:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.210 02:44:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.210 02:44:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.210 02:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.210 02:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:38.210 02:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.210 02:44:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.210 02:44:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.210 02:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.210 02:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:38.210 02:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.210 02:44:11 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.210 02:44:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.210 02:44:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:38.210 02:44:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.210 02:44:11 -- host/auth.sh@44 -- # digest=sha384 00:24:38.210 02:44:11 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.210 02:44:11 -- host/auth.sh@44 -- # keyid=0 00:24:38.210 02:44:11 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:38.210 02:44:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:38.210 02:44:11 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:38.210 02:44:11 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:38.210 02:44:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:24:38.210 02:44:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.210 02:44:11 -- host/auth.sh@68 -- # digest=sha384 00:24:38.210 02:44:11 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:38.210 02:44:11 -- host/auth.sh@68 -- # keyid=0 00:24:38.210 02:44:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:38.210 02:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.210 02:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:38.210 02:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.210 02:44:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.210 02:44:11 -- nvmf/common.sh@717 -- # local ip 00:24:38.210 02:44:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.210 02:44:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.210 02:44:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.210 02:44:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.210 02:44:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.210 02:44:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.210 02:44:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.210 
02:44:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.210 02:44:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.210 02:44:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:38.210 02:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.210 02:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:38.780 nvme0n1 00:24:38.780 02:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.780 02:44:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.780 02:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.780 02:44:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.780 02:44:12 -- common/autotest_common.sh@10 -- # set +x 00:24:38.780 02:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.780 02:44:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.780 02:44:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.780 02:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.780 02:44:12 -- common/autotest_common.sh@10 -- # set +x 00:24:38.780 02:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.780 02:44:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.780 02:44:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:38.780 02:44:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.780 02:44:12 -- host/auth.sh@44 -- # digest=sha384 00:24:38.780 02:44:12 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.780 02:44:12 -- host/auth.sh@44 -- # keyid=1 00:24:38.780 02:44:12 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:38.780 02:44:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:38.780 02:44:12 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:38.780 02:44:12 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:38.780 02:44:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:24:38.780 02:44:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.780 02:44:12 -- host/auth.sh@68 -- # digest=sha384 00:24:38.780 02:44:12 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:38.781 02:44:12 -- host/auth.sh@68 -- # keyid=1 00:24:38.781 02:44:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:38.781 02:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.781 02:44:12 -- common/autotest_common.sh@10 -- # set +x 00:24:38.781 02:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.781 02:44:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.781 02:44:12 -- nvmf/common.sh@717 -- # local ip 00:24:38.781 02:44:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.781 02:44:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.781 02:44:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.781 02:44:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.781 02:44:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.781 02:44:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.781 02:44:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.781 02:44:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.781 02:44:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
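
The secrets cycled through these rounds are plain DHHC-1 strings (hash-transform ids 00 through 03 all appear above). A quick, illustrative way to sanity-check one of them, assuming the usual DHHC-1 layout of a base64 payload holding the raw secret plus a trailing 4-byte CRC-32 (that layout is an assumption, not something this log states):

# Illustrative only: inspect one of the DHHC-1 secrets quoted in the trace.
# Assumed format: DHHC-1:<hh>:<base64(secret || crc32)>: where <hh> selects an
# optional secret transform (00 = none, 01/02/03 = SHA-256/384/512).
key='DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/:'
payload=${key#DHHC-1:*:}   # drop the "DHHC-1:<hh>:" prefix
payload=${payload%:}       # drop the trailing ':'
total=$(printf '%s' "$payload" | base64 -d | wc -c)
echo "payload: ${total} bytes -> secret: $((total - 4)) bytes + 4-byte CRC"

For this key the payload decodes to 36 bytes, i.e. a 32-byte secret, matching the shorter keys in the trace; the DHHC-1:02 and DHHC-1:03 keys decode to 48- and 64-byte secrets respectively.
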
00:24:38.781 02:44:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:38.781 02:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.781 02:44:12 -- common/autotest_common.sh@10 -- # set +x 00:24:39.351 nvme0n1 00:24:39.351 02:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.351 02:44:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.351 02:44:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.351 02:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.351 02:44:12 -- common/autotest_common.sh@10 -- # set +x 00:24:39.351 02:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.351 02:44:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.351 02:44:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.351 02:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.351 02:44:12 -- common/autotest_common.sh@10 -- # set +x 00:24:39.351 02:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.351 02:44:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:39.351 02:44:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:39.351 02:44:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:39.351 02:44:12 -- host/auth.sh@44 -- # digest=sha384 00:24:39.351 02:44:12 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:39.351 02:44:12 -- host/auth.sh@44 -- # keyid=2 00:24:39.351 02:44:12 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:39.351 02:44:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:39.351 02:44:12 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:39.351 02:44:12 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:39.351 02:44:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:24:39.351 02:44:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:39.351 02:44:12 -- host/auth.sh@68 -- # digest=sha384 00:24:39.351 02:44:12 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:39.351 02:44:12 -- host/auth.sh@68 -- # keyid=2 00:24:39.351 02:44:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:39.351 02:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.351 02:44:12 -- common/autotest_common.sh@10 -- # set +x 00:24:39.351 02:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.351 02:44:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:39.351 02:44:12 -- nvmf/common.sh@717 -- # local ip 00:24:39.351 02:44:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:39.351 02:44:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:39.351 02:44:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.351 02:44:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.351 02:44:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:39.351 02:44:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.351 02:44:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:39.351 02:44:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:39.351 02:44:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:39.351 02:44:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:39.351 02:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.351 02:44:12 -- common/autotest_common.sh@10 -- # set +x 00:24:39.611 nvme0n1 00:24:39.611 02:44:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.611 02:44:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.611 02:44:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.611 02:44:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.611 02:44:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.611 02:44:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.872 02:44:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.872 02:44:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.872 02:44:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.872 02:44:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.872 02:44:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.872 02:44:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:39.872 02:44:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:39.872 02:44:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:39.872 02:44:13 -- host/auth.sh@44 -- # digest=sha384 00:24:39.872 02:44:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:39.872 02:44:13 -- host/auth.sh@44 -- # keyid=3 00:24:39.872 02:44:13 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:39.872 02:44:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:39.872 02:44:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:39.872 02:44:13 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:39.872 02:44:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:24:39.872 02:44:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:39.872 02:44:13 -- host/auth.sh@68 -- # digest=sha384 00:24:39.872 02:44:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:39.872 02:44:13 -- host/auth.sh@68 -- # keyid=3 00:24:39.872 02:44:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:39.872 02:44:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.872 02:44:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.872 02:44:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.872 02:44:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:39.872 02:44:13 -- nvmf/common.sh@717 -- # local ip 00:24:39.872 02:44:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:39.872 02:44:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:39.872 02:44:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.872 02:44:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.872 02:44:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:39.872 02:44:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.872 02:44:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:39.872 02:44:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:39.872 02:44:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:39.872 02:44:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:39.872 02:44:13 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:24:39.872 02:44:13 -- common/autotest_common.sh@10 -- # set +x 00:24:40.133 nvme0n1 00:24:40.133 02:44:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.133 02:44:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.133 02:44:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:40.133 02:44:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.133 02:44:13 -- common/autotest_common.sh@10 -- # set +x 00:24:40.133 02:44:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.133 02:44:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.133 02:44:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.133 02:44:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.133 02:44:13 -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 02:44:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.393 02:44:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:40.393 02:44:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:40.393 02:44:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.393 02:44:13 -- host/auth.sh@44 -- # digest=sha384 00:24:40.393 02:44:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:40.393 02:44:13 -- host/auth.sh@44 -- # keyid=4 00:24:40.393 02:44:13 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:40.393 02:44:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:40.393 02:44:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:40.393 02:44:13 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:40.393 02:44:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:24:40.393 02:44:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:40.393 02:44:13 -- host/auth.sh@68 -- # digest=sha384 00:24:40.393 02:44:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:40.393 02:44:13 -- host/auth.sh@68 -- # keyid=4 00:24:40.393 02:44:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:40.393 02:44:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.393 02:44:13 -- common/autotest_common.sh@10 -- # set +x 00:24:40.393 02:44:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.393 02:44:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:40.393 02:44:13 -- nvmf/common.sh@717 -- # local ip 00:24:40.393 02:44:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.393 02:44:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.394 02:44:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.394 02:44:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.394 02:44:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:40.394 02:44:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.394 02:44:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:40.394 02:44:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:40.394 02:44:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:40.394 02:44:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.394 02:44:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.394 02:44:13 -- common/autotest_common.sh@10 -- # set +x 00:24:40.655 
nvme0n1 00:24:40.655 02:44:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.655 02:44:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.655 02:44:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.655 02:44:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:40.655 02:44:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.655 02:44:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.655 02:44:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.655 02:44:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.655 02:44:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.655 02:44:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.916 02:44:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.916 02:44:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:40.916 02:44:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:40.916 02:44:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:40.916 02:44:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.916 02:44:14 -- host/auth.sh@44 -- # digest=sha384 00:24:40.916 02:44:14 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.916 02:44:14 -- host/auth.sh@44 -- # keyid=0 00:24:40.916 02:44:14 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:40.916 02:44:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:40.916 02:44:14 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:40.916 02:44:14 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:40.916 02:44:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:24:40.916 02:44:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:40.916 02:44:14 -- host/auth.sh@68 -- # digest=sha384 00:24:40.916 02:44:14 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:40.916 02:44:14 -- host/auth.sh@68 -- # keyid=0 00:24:40.916 02:44:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:40.916 02:44:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.916 02:44:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.916 02:44:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.916 02:44:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:40.916 02:44:14 -- nvmf/common.sh@717 -- # local ip 00:24:40.916 02:44:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.916 02:44:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.916 02:44:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.916 02:44:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.916 02:44:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:40.916 02:44:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.916 02:44:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:40.916 02:44:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:40.916 02:44:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:40.916 02:44:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:40.916 02:44:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.916 02:44:14 -- common/autotest_common.sh@10 -- # set +x 00:24:41.486 nvme0n1 00:24:41.486 02:44:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
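
On the target side, each nvmet_auth_set_key call above shows only three bare echo commands because bash xtrace does not print redirections. A plausible reconstruction of what those echoes feed, assuming a kernel nvmet target with per-host DH-HMAC-CHAP attributes in configfs (the destination paths below are an assumption and are not visible in this log), is:

# Hypothetical reconstruction of nvmet_auth_set_key for sha384/ffdhe8192/key index 0.
# Assumption: kernel nvmet target exposing dhchap_* attributes under
# /sys/kernel/config/nvmet/hosts/<hostnqn>/ — the trace itself only records the echoes.
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

echo 'hmac(sha384)' > "$host_dir/dhchap_hash"
echo ffdhe8192 > "$host_dir/dhchap_dhgroup"
echo 'DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3:' > "$host_dir/dhchap_key"
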
00:24:41.486 02:44:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.486 02:44:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:41.486 02:44:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.486 02:44:15 -- common/autotest_common.sh@10 -- # set +x 00:24:41.486 02:44:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.486 02:44:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.486 02:44:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.486 02:44:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.486 02:44:15 -- common/autotest_common.sh@10 -- # set +x 00:24:41.486 02:44:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.486 02:44:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:41.486 02:44:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:41.486 02:44:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:41.486 02:44:15 -- host/auth.sh@44 -- # digest=sha384 00:24:41.486 02:44:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:41.486 02:44:15 -- host/auth.sh@44 -- # keyid=1 00:24:41.486 02:44:15 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:41.486 02:44:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:41.486 02:44:15 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:41.486 02:44:15 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:41.486 02:44:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:24:41.486 02:44:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:41.486 02:44:15 -- host/auth.sh@68 -- # digest=sha384 00:24:41.486 02:44:15 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:41.486 02:44:15 -- host/auth.sh@68 -- # keyid=1 00:24:41.486 02:44:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:41.486 02:44:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.486 02:44:15 -- common/autotest_common.sh@10 -- # set +x 00:24:41.486 02:44:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.486 02:44:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:41.486 02:44:15 -- nvmf/common.sh@717 -- # local ip 00:24:41.486 02:44:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:41.486 02:44:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:41.486 02:44:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.486 02:44:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.486 02:44:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:41.486 02:44:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.486 02:44:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:41.486 02:44:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:41.486 02:44:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:41.486 02:44:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:41.486 02:44:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.486 02:44:15 -- common/autotest_common.sh@10 -- # set +x 00:24:42.428 nvme0n1 00:24:42.428 02:44:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.428 02:44:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.428 02:44:15 -- host/auth.sh@73 
-- # jq -r '.[].name' 00:24:42.428 02:44:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.428 02:44:15 -- common/autotest_common.sh@10 -- # set +x 00:24:42.428 02:44:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.428 02:44:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.428 02:44:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.428 02:44:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.428 02:44:15 -- common/autotest_common.sh@10 -- # set +x 00:24:42.428 02:44:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.428 02:44:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:42.428 02:44:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:42.428 02:44:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:42.428 02:44:15 -- host/auth.sh@44 -- # digest=sha384 00:24:42.428 02:44:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.428 02:44:15 -- host/auth.sh@44 -- # keyid=2 00:24:42.428 02:44:15 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:42.428 02:44:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:42.428 02:44:15 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:42.428 02:44:15 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:42.428 02:44:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:24:42.428 02:44:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:42.428 02:44:15 -- host/auth.sh@68 -- # digest=sha384 00:24:42.428 02:44:15 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:42.428 02:44:15 -- host/auth.sh@68 -- # keyid=2 00:24:42.428 02:44:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:42.429 02:44:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.429 02:44:15 -- common/autotest_common.sh@10 -- # set +x 00:24:42.429 02:44:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.429 02:44:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:42.429 02:44:15 -- nvmf/common.sh@717 -- # local ip 00:24:42.429 02:44:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:42.429 02:44:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:42.429 02:44:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.429 02:44:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.429 02:44:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:42.429 02:44:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.429 02:44:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:42.429 02:44:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:42.429 02:44:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:42.429 02:44:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:42.429 02:44:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.429 02:44:15 -- common/autotest_common.sh@10 -- # set +x 00:24:42.999 nvme0n1 00:24:43.000 02:44:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.000 02:44:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.000 02:44:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.000 02:44:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:43.000 02:44:16 -- common/autotest_common.sh@10 -- # set +x 
00:24:43.000 02:44:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.000 02:44:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.000 02:44:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.000 02:44:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.000 02:44:16 -- common/autotest_common.sh@10 -- # set +x 00:24:43.000 02:44:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.000 02:44:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:43.000 02:44:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:43.000 02:44:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:43.000 02:44:16 -- host/auth.sh@44 -- # digest=sha384 00:24:43.000 02:44:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.000 02:44:16 -- host/auth.sh@44 -- # keyid=3 00:24:43.000 02:44:16 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:43.000 02:44:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:43.000 02:44:16 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:43.000 02:44:16 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:43.000 02:44:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:24:43.000 02:44:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:43.000 02:44:16 -- host/auth.sh@68 -- # digest=sha384 00:24:43.000 02:44:16 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:43.000 02:44:16 -- host/auth.sh@68 -- # keyid=3 00:24:43.000 02:44:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:43.000 02:44:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.000 02:44:16 -- common/autotest_common.sh@10 -- # set +x 00:24:43.000 02:44:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.260 02:44:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:43.260 02:44:16 -- nvmf/common.sh@717 -- # local ip 00:24:43.260 02:44:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:43.260 02:44:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:43.260 02:44:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.260 02:44:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.260 02:44:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:43.260 02:44:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.260 02:44:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:43.260 02:44:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:43.260 02:44:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:43.260 02:44:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:43.260 02:44:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.260 02:44:16 -- common/autotest_common.sh@10 -- # set +x 00:24:43.832 nvme0n1 00:24:43.832 02:44:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.832 02:44:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.832 02:44:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:43.832 02:44:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.832 02:44:17 -- common/autotest_common.sh@10 -- # set +x 00:24:43.832 02:44:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.832 02:44:17 -- host/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:24:43.832 02:44:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.832 02:44:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.832 02:44:17 -- common/autotest_common.sh@10 -- # set +x 00:24:43.832 02:44:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.832 02:44:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:43.832 02:44:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:43.832 02:44:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:43.832 02:44:17 -- host/auth.sh@44 -- # digest=sha384 00:24:43.832 02:44:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.832 02:44:17 -- host/auth.sh@44 -- # keyid=4 00:24:43.832 02:44:17 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:43.832 02:44:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:43.832 02:44:17 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:43.832 02:44:17 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:43.832 02:44:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:24:43.832 02:44:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:43.832 02:44:17 -- host/auth.sh@68 -- # digest=sha384 00:24:43.832 02:44:17 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:43.832 02:44:17 -- host/auth.sh@68 -- # keyid=4 00:24:43.832 02:44:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:43.832 02:44:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.832 02:44:17 -- common/autotest_common.sh@10 -- # set +x 00:24:43.832 02:44:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.832 02:44:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:43.832 02:44:17 -- nvmf/common.sh@717 -- # local ip 00:24:43.832 02:44:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:43.832 02:44:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:43.832 02:44:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.832 02:44:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.832 02:44:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:43.832 02:44:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.832 02:44:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:43.832 02:44:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:43.832 02:44:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:43.832 02:44:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:43.832 02:44:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.832 02:44:17 -- common/autotest_common.sh@10 -- # set +x 00:24:44.776 nvme0n1 00:24:44.776 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.776 02:44:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.776 02:44:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:44.776 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.776 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:44.776 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.776 02:44:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.776 02:44:18 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:44.776 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.776 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:44.776 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.776 02:44:18 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:44.776 02:44:18 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:44.776 02:44:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:44.776 02:44:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:44.776 02:44:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:44.776 02:44:18 -- host/auth.sh@44 -- # digest=sha512 00:24:44.776 02:44:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:44.776 02:44:18 -- host/auth.sh@44 -- # keyid=0 00:24:44.776 02:44:18 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:44.776 02:44:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:44.776 02:44:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:44.776 02:44:18 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:44.776 02:44:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:24:44.776 02:44:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:44.776 02:44:18 -- host/auth.sh@68 -- # digest=sha512 00:24:44.776 02:44:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:44.776 02:44:18 -- host/auth.sh@68 -- # keyid=0 00:24:44.776 02:44:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:44.776 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.776 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:44.776 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.776 02:44:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:44.776 02:44:18 -- nvmf/common.sh@717 -- # local ip 00:24:44.776 02:44:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:44.776 02:44:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:44.776 02:44:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.776 02:44:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.776 02:44:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:44.776 02:44:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.776 02:44:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:44.776 02:44:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:44.776 02:44:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:44.776 02:44:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:44.776 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.776 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:44.776 nvme0n1 00:24:44.776 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.776 02:44:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.776 02:44:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:44.776 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.776 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:44.776 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.776 02:44:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.776 02:44:18 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:44.776 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.776 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:44.776 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.776 02:44:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:44.776 02:44:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:44.776 02:44:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:44.776 02:44:18 -- host/auth.sh@44 -- # digest=sha512 00:24:44.776 02:44:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:44.776 02:44:18 -- host/auth.sh@44 -- # keyid=1 00:24:44.776 02:44:18 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:44.776 02:44:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:44.776 02:44:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:44.776 02:44:18 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:44.776 02:44:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:24:44.776 02:44:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:44.776 02:44:18 -- host/auth.sh@68 -- # digest=sha512 00:24:44.776 02:44:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:44.776 02:44:18 -- host/auth.sh@68 -- # keyid=1 00:24:44.776 02:44:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:44.776 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.776 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:44.776 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.776 02:44:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:44.776 02:44:18 -- nvmf/common.sh@717 -- # local ip 00:24:44.776 02:44:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:44.776 02:44:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:44.776 02:44:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.776 02:44:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.776 02:44:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:44.776 02:44:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.776 02:44:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:44.776 02:44:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:44.776 02:44:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:44.776 02:44:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:44.776 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.776 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.038 nvme0n1 00:24:45.038 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.038 02:44:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.038 02:44:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:45.038 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.038 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.038 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.038 02:44:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.038 02:44:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.038 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 
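[Editor's sketch] The trace above and below repeats the same pattern for every (digest, dhgroup, keyid) combination. A minimal reconstruction of the loop driving it, assembled from the xtrace output of host/auth.sh in this log, is shown here: the helper names (rpc_cmd, nvmet_auth_set_key, connect_authenticate, get_main_ns_ip) and all RPC flags are copied verbatim from the trace, while the loop variables, the keys array layout, and the NQN literals are taken from this particular run and may differ elsewhere.

    # host-side helper as it behaves in this trace (host/auth.sh@66-74):
    # constrain the negotiation, attach with the matching key, verify, detach
    connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
      # the bare "nvme0n1" lines in the log are the namespace appearing after a successful attach
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    }

    # outer loops as traced at host/auth.sh@107-110
    for digest in "${digests[@]}"; do            # sha384, sha512, ... as seen in this trace
      for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do           # 0..4, one DHHC-1 secret per key id
          # target side: install the secret for this digest/dhgroup/key id before connecting
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done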
00:24:45.038 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.038 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.038 02:44:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:45.038 02:44:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:45.038 02:44:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:45.038 02:44:18 -- host/auth.sh@44 -- # digest=sha512 00:24:45.038 02:44:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:45.038 02:44:18 -- host/auth.sh@44 -- # keyid=2 00:24:45.038 02:44:18 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:45.038 02:44:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:45.038 02:44:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:45.038 02:44:18 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:45.038 02:44:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:24:45.038 02:44:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:45.038 02:44:18 -- host/auth.sh@68 -- # digest=sha512 00:24:45.038 02:44:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:45.038 02:44:18 -- host/auth.sh@68 -- # keyid=2 00:24:45.038 02:44:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:45.038 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.038 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.038 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.038 02:44:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:45.038 02:44:18 -- nvmf/common.sh@717 -- # local ip 00:24:45.038 02:44:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.038 02:44:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.038 02:44:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.038 02:44:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.038 02:44:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:45.038 02:44:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.038 02:44:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:45.038 02:44:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:45.038 02:44:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:45.038 02:44:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:45.038 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.038 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.299 nvme0n1 00:24:45.299 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.299 02:44:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.299 02:44:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:45.299 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.299 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.299 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.299 02:44:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.299 02:44:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.299 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.299 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.299 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.299 02:44:18 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:45.299 02:44:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:45.299 02:44:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:45.299 02:44:18 -- host/auth.sh@44 -- # digest=sha512 00:24:45.299 02:44:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:45.299 02:44:18 -- host/auth.sh@44 -- # keyid=3 00:24:45.299 02:44:18 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:45.299 02:44:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:45.299 02:44:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:45.299 02:44:18 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:45.299 02:44:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:24:45.299 02:44:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:45.299 02:44:18 -- host/auth.sh@68 -- # digest=sha512 00:24:45.299 02:44:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:45.299 02:44:18 -- host/auth.sh@68 -- # keyid=3 00:24:45.299 02:44:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:45.299 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.299 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.299 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.299 02:44:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:45.299 02:44:18 -- nvmf/common.sh@717 -- # local ip 00:24:45.299 02:44:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.299 02:44:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.299 02:44:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.299 02:44:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.299 02:44:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:45.300 02:44:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.300 02:44:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:45.300 02:44:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:45.300 02:44:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:45.300 02:44:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:45.300 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.300 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.300 nvme0n1 00:24:45.300 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.300 02:44:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.300 02:44:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:45.300 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.300 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.300 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.300 02:44:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.300 02:44:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.300 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.300 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.561 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.561 02:44:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:45.561 02:44:18 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe2048 4 00:24:45.561 02:44:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:45.561 02:44:18 -- host/auth.sh@44 -- # digest=sha512 00:24:45.561 02:44:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:45.561 02:44:18 -- host/auth.sh@44 -- # keyid=4 00:24:45.561 02:44:18 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:45.561 02:44:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:45.561 02:44:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:45.561 02:44:18 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:45.561 02:44:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:24:45.561 02:44:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:45.561 02:44:18 -- host/auth.sh@68 -- # digest=sha512 00:24:45.561 02:44:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:45.561 02:44:18 -- host/auth.sh@68 -- # keyid=4 00:24:45.561 02:44:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:45.561 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.561 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.561 02:44:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.561 02:44:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:45.561 02:44:18 -- nvmf/common.sh@717 -- # local ip 00:24:45.561 02:44:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.561 02:44:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.561 02:44:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.561 02:44:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.561 02:44:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:45.561 02:44:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.561 02:44:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:45.561 02:44:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:45.561 02:44:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:45.561 02:44:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:45.561 02:44:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.561 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.561 nvme0n1 00:24:45.561 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.561 02:44:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.561 02:44:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:45.561 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.561 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:45.561 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.561 02:44:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.561 02:44:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.561 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.561 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:45.561 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.561 02:44:19 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:45.561 02:44:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:45.561 02:44:19 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe3072 0 00:24:45.561 02:44:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:45.561 02:44:19 -- host/auth.sh@44 -- # digest=sha512 00:24:45.561 02:44:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:45.561 02:44:19 -- host/auth.sh@44 -- # keyid=0 00:24:45.561 02:44:19 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:45.561 02:44:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:45.561 02:44:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:45.561 02:44:19 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:45.561 02:44:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:24:45.561 02:44:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:45.561 02:44:19 -- host/auth.sh@68 -- # digest=sha512 00:24:45.561 02:44:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:45.561 02:44:19 -- host/auth.sh@68 -- # keyid=0 00:24:45.561 02:44:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:45.561 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.561 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:45.561 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.561 02:44:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:45.561 02:44:19 -- nvmf/common.sh@717 -- # local ip 00:24:45.561 02:44:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.561 02:44:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.561 02:44:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.561 02:44:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.561 02:44:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:45.561 02:44:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.561 02:44:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:45.561 02:44:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:45.561 02:44:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:45.561 02:44:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:45.561 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.561 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:45.822 nvme0n1 00:24:45.822 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.822 02:44:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.822 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.822 02:44:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:45.822 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:45.822 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.822 02:44:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.822 02:44:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.822 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.822 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:45.822 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.822 02:44:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:45.822 02:44:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:45.822 02:44:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:45.822 02:44:19 -- host/auth.sh@44 -- # digest=sha512 00:24:45.822 
02:44:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:45.822 02:44:19 -- host/auth.sh@44 -- # keyid=1 00:24:45.822 02:44:19 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:45.822 02:44:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:45.822 02:44:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:45.822 02:44:19 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:45.822 02:44:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:24:45.822 02:44:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:45.822 02:44:19 -- host/auth.sh@68 -- # digest=sha512 00:24:45.822 02:44:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:45.822 02:44:19 -- host/auth.sh@68 -- # keyid=1 00:24:45.822 02:44:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:45.822 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.822 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:45.822 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.822 02:44:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:45.822 02:44:19 -- nvmf/common.sh@717 -- # local ip 00:24:45.822 02:44:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.822 02:44:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.822 02:44:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.823 02:44:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.823 02:44:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:45.823 02:44:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.823 02:44:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:45.823 02:44:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:45.823 02:44:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:45.823 02:44:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:45.823 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.823 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:46.084 nvme0n1 00:24:46.084 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.084 02:44:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.084 02:44:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:46.084 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.084 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:46.084 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.084 02:44:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.084 02:44:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.084 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.084 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:46.084 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.084 02:44:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:46.084 02:44:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:46.084 02:44:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:46.084 02:44:19 -- host/auth.sh@44 -- # digest=sha512 00:24:46.084 02:44:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:46.084 02:44:19 -- host/auth.sh@44 -- # keyid=2 00:24:46.084 
02:44:19 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:46.084 02:44:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:46.084 02:44:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:46.084 02:44:19 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:46.084 02:44:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:24:46.084 02:44:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:46.084 02:44:19 -- host/auth.sh@68 -- # digest=sha512 00:24:46.084 02:44:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:46.084 02:44:19 -- host/auth.sh@68 -- # keyid=2 00:24:46.084 02:44:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:46.084 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.084 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:46.084 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.084 02:44:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:46.084 02:44:19 -- nvmf/common.sh@717 -- # local ip 00:24:46.084 02:44:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:46.084 02:44:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:46.084 02:44:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.084 02:44:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.084 02:44:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:46.084 02:44:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.084 02:44:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:46.084 02:44:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:46.084 02:44:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:46.084 02:44:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:46.084 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.084 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:46.345 nvme0n1 00:24:46.345 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.345 02:44:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.345 02:44:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:46.345 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.345 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:46.345 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.345 02:44:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.345 02:44:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.345 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.345 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:46.345 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.345 02:44:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:46.345 02:44:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:46.345 02:44:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:46.345 02:44:19 -- host/auth.sh@44 -- # digest=sha512 00:24:46.345 02:44:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:46.345 02:44:19 -- host/auth.sh@44 -- # keyid=3 00:24:46.345 02:44:19 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:46.345 02:44:19 -- host/auth.sh@47 -- # 
echo 'hmac(sha512)' 00:24:46.345 02:44:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:46.345 02:44:19 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:46.345 02:44:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:24:46.345 02:44:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:46.345 02:44:19 -- host/auth.sh@68 -- # digest=sha512 00:24:46.345 02:44:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:46.345 02:44:19 -- host/auth.sh@68 -- # keyid=3 00:24:46.345 02:44:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:46.345 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.345 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:46.345 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.345 02:44:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:46.345 02:44:19 -- nvmf/common.sh@717 -- # local ip 00:24:46.345 02:44:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:46.345 02:44:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:46.345 02:44:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.346 02:44:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.346 02:44:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:46.346 02:44:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.346 02:44:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:46.346 02:44:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:46.346 02:44:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:46.346 02:44:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:46.346 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.346 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:46.607 nvme0n1 00:24:46.607 02:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.607 02:44:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.607 02:44:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:46.607 02:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.607 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:24:46.607 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.607 02:44:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.607 02:44:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.607 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.607 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.607 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.607 02:44:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:46.607 02:44:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:46.607 02:44:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:46.607 02:44:20 -- host/auth.sh@44 -- # digest=sha512 00:24:46.607 02:44:20 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:46.607 02:44:20 -- host/auth.sh@44 -- # keyid=4 00:24:46.607 02:44:20 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:46.607 02:44:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:46.607 02:44:20 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:46.607 
02:44:20 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:46.607 02:44:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:24:46.607 02:44:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:46.607 02:44:20 -- host/auth.sh@68 -- # digest=sha512 00:24:46.607 02:44:20 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:46.607 02:44:20 -- host/auth.sh@68 -- # keyid=4 00:24:46.607 02:44:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:46.607 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.607 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.607 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.607 02:44:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:46.607 02:44:20 -- nvmf/common.sh@717 -- # local ip 00:24:46.607 02:44:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:46.607 02:44:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:46.607 02:44:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.607 02:44:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.607 02:44:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:46.607 02:44:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.607 02:44:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:46.607 02:44:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:46.607 02:44:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:46.607 02:44:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:46.607 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.607 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.607 nvme0n1 00:24:46.607 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.868 02:44:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.868 02:44:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:46.868 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.868 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.868 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.868 02:44:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.868 02:44:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.868 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.868 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.868 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.868 02:44:20 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:46.868 02:44:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:46.868 02:44:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:46.868 02:44:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:46.868 02:44:20 -- host/auth.sh@44 -- # digest=sha512 00:24:46.868 02:44:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:46.868 02:44:20 -- host/auth.sh@44 -- # keyid=0 00:24:46.868 02:44:20 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:46.868 02:44:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:46.868 02:44:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:46.868 02:44:20 -- host/auth.sh@49 -- # echo 
DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:46.868 02:44:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:24:46.868 02:44:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:46.868 02:44:20 -- host/auth.sh@68 -- # digest=sha512 00:24:46.868 02:44:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:46.868 02:44:20 -- host/auth.sh@68 -- # keyid=0 00:24:46.868 02:44:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:46.868 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.868 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:46.868 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.868 02:44:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:46.868 02:44:20 -- nvmf/common.sh@717 -- # local ip 00:24:46.868 02:44:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:46.869 02:44:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:46.869 02:44:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.869 02:44:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.869 02:44:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:46.869 02:44:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.869 02:44:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:46.869 02:44:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:46.869 02:44:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:46.869 02:44:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:46.869 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.869 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:47.130 nvme0n1 00:24:47.130 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.130 02:44:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.130 02:44:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:47.130 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.130 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:47.130 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.130 02:44:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.130 02:44:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.130 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.130 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:47.130 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.130 02:44:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:47.130 02:44:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:47.130 02:44:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:47.130 02:44:20 -- host/auth.sh@44 -- # digest=sha512 00:24:47.130 02:44:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:47.130 02:44:20 -- host/auth.sh@44 -- # keyid=1 00:24:47.130 02:44:20 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:47.130 02:44:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:47.130 02:44:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:47.130 02:44:20 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:47.130 02:44:20 -- host/auth.sh@111 -- # 
connect_authenticate sha512 ffdhe4096 1 00:24:47.130 02:44:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:47.130 02:44:20 -- host/auth.sh@68 -- # digest=sha512 00:24:47.130 02:44:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:47.130 02:44:20 -- host/auth.sh@68 -- # keyid=1 00:24:47.130 02:44:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:47.130 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.130 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:47.130 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.130 02:44:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:47.130 02:44:20 -- nvmf/common.sh@717 -- # local ip 00:24:47.130 02:44:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:47.130 02:44:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:47.130 02:44:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.130 02:44:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.130 02:44:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:47.130 02:44:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.130 02:44:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:47.130 02:44:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:47.130 02:44:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:47.130 02:44:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:47.130 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.130 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:47.391 nvme0n1 00:24:47.391 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.391 02:44:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.391 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.391 02:44:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:47.391 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:47.391 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.391 02:44:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.391 02:44:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.391 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.391 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:47.391 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.391 02:44:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:47.391 02:44:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:47.391 02:44:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:47.391 02:44:20 -- host/auth.sh@44 -- # digest=sha512 00:24:47.391 02:44:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:47.391 02:44:20 -- host/auth.sh@44 -- # keyid=2 00:24:47.391 02:44:20 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:47.391 02:44:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:47.391 02:44:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:47.391 02:44:20 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:47.391 02:44:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:24:47.391 02:44:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:47.391 02:44:20 -- host/auth.sh@68 -- # 
digest=sha512 00:24:47.391 02:44:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:47.391 02:44:20 -- host/auth.sh@68 -- # keyid=2 00:24:47.391 02:44:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:47.391 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.391 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:47.391 02:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.391 02:44:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:47.391 02:44:20 -- nvmf/common.sh@717 -- # local ip 00:24:47.391 02:44:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:47.391 02:44:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:47.391 02:44:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.391 02:44:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.391 02:44:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:47.391 02:44:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.391 02:44:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:47.391 02:44:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:47.391 02:44:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:47.391 02:44:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:47.391 02:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.391 02:44:20 -- common/autotest_common.sh@10 -- # set +x 00:24:47.652 nvme0n1 00:24:47.652 02:44:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.652 02:44:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.652 02:44:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.652 02:44:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:47.652 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:24:47.652 02:44:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.652 02:44:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.652 02:44:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.652 02:44:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.652 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:24:47.652 02:44:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.652 02:44:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:47.652 02:44:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:47.652 02:44:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:47.913 02:44:21 -- host/auth.sh@44 -- # digest=sha512 00:24:47.913 02:44:21 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:47.913 02:44:21 -- host/auth.sh@44 -- # keyid=3 00:24:47.913 02:44:21 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:47.913 02:44:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:47.913 02:44:21 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:47.913 02:44:21 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:47.913 02:44:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:24:47.913 02:44:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:47.913 02:44:21 -- host/auth.sh@68 -- # digest=sha512 00:24:47.913 02:44:21 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:47.913 02:44:21 -- host/auth.sh@68 
-- # keyid=3 00:24:47.913 02:44:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:47.913 02:44:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.913 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:24:47.913 02:44:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.913 02:44:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:47.913 02:44:21 -- nvmf/common.sh@717 -- # local ip 00:24:47.913 02:44:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:47.913 02:44:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:47.913 02:44:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.913 02:44:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.913 02:44:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:47.913 02:44:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.913 02:44:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:47.913 02:44:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:47.913 02:44:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:47.913 02:44:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:47.913 02:44:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.913 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:24:48.173 nvme0n1 00:24:48.173 02:44:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.173 02:44:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.173 02:44:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.173 02:44:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:48.173 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:24:48.173 02:44:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.173 02:44:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.173 02:44:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.173 02:44:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.173 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:24:48.173 02:44:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.173 02:44:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:48.173 02:44:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:48.173 02:44:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:48.173 02:44:21 -- host/auth.sh@44 -- # digest=sha512 00:24:48.173 02:44:21 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.174 02:44:21 -- host/auth.sh@44 -- # keyid=4 00:24:48.174 02:44:21 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:48.174 02:44:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:48.174 02:44:21 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:48.174 02:44:21 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:48.174 02:44:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:24:48.174 02:44:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:48.174 02:44:21 -- host/auth.sh@68 -- # digest=sha512 00:24:48.174 02:44:21 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:48.174 02:44:21 -- host/auth.sh@68 -- # keyid=4 00:24:48.174 02:44:21 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:48.174 02:44:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.174 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:24:48.174 02:44:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.174 02:44:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:48.174 02:44:21 -- nvmf/common.sh@717 -- # local ip 00:24:48.174 02:44:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:48.174 02:44:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:48.174 02:44:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.174 02:44:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.174 02:44:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:48.174 02:44:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.174 02:44:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:48.174 02:44:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:48.174 02:44:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:48.174 02:44:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:48.174 02:44:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.174 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:24:48.435 nvme0n1 00:24:48.435 02:44:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.435 02:44:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.435 02:44:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:48.435 02:44:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.435 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:24:48.435 02:44:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.435 02:44:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.435 02:44:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.435 02:44:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.435 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:24:48.435 02:44:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.435 02:44:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:48.435 02:44:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:48.435 02:44:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:48.435 02:44:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:48.435 02:44:21 -- host/auth.sh@44 -- # digest=sha512 00:24:48.435 02:44:21 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:48.435 02:44:21 -- host/auth.sh@44 -- # keyid=0 00:24:48.435 02:44:21 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:48.435 02:44:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:48.435 02:44:21 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:48.435 02:44:21 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:48.435 02:44:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:24:48.435 02:44:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:48.435 02:44:21 -- host/auth.sh@68 -- # digest=sha512 00:24:48.435 02:44:21 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:48.435 02:44:21 -- host/auth.sh@68 -- # keyid=0 00:24:48.435 02:44:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:48.435 
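[Editor's sketch] Every attach in this trace is preceded by the same get_main_ns_ip expansion (nvmf/common.sh@717-731): the helper maps the transport to a variable name and echoes that variable's value, which is why each bdev_nvme_attach_controller call here uses -a 10.0.0.1, the NVMF_INITIATOR_IP of this tcp run. A minimal reconstruction follows; the variable holding the transport name is not visible in the trace (only its value, "tcp"), so $TEST_TRANSPORT is an assumed name.

    get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA jobs resolve to the first target IP
        ["tcp"]=NVMF_INITIATOR_IP       # TCP jobs (this one) resolve to the initiator IP
      )
      # bail out if the transport or its candidate variable is unset,
      # mirroring the [[ -z ... ]] checks visible in the trace
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      # dereference the chosen variable name; in this run ${!ip} expands to 10.0.0.1
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
    }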
02:44:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.435 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:24:48.435 02:44:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.435 02:44:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:48.435 02:44:21 -- nvmf/common.sh@717 -- # local ip 00:24:48.435 02:44:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:48.435 02:44:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:48.435 02:44:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.435 02:44:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.435 02:44:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:48.435 02:44:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.435 02:44:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:48.435 02:44:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:48.435 02:44:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:48.435 02:44:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:48.435 02:44:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.435 02:44:21 -- common/autotest_common.sh@10 -- # set +x 00:24:49.008 nvme0n1 00:24:49.008 02:44:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.008 02:44:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.008 02:44:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:49.008 02:44:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.008 02:44:22 -- common/autotest_common.sh@10 -- # set +x 00:24:49.008 02:44:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.008 02:44:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.008 02:44:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.008 02:44:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.008 02:44:22 -- common/autotest_common.sh@10 -- # set +x 00:24:49.008 02:44:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.008 02:44:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:49.008 02:44:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:49.008 02:44:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:49.008 02:44:22 -- host/auth.sh@44 -- # digest=sha512 00:24:49.008 02:44:22 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:49.008 02:44:22 -- host/auth.sh@44 -- # keyid=1 00:24:49.008 02:44:22 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:49.008 02:44:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:49.008 02:44:22 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:49.008 02:44:22 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:49.008 02:44:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:24:49.008 02:44:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:49.008 02:44:22 -- host/auth.sh@68 -- # digest=sha512 00:24:49.008 02:44:22 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:49.008 02:44:22 -- host/auth.sh@68 -- # keyid=1 00:24:49.008 02:44:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:49.008 02:44:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.008 02:44:22 -- common/autotest_common.sh@10 -- # 
set +x 00:24:49.008 02:44:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.008 02:44:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:49.008 02:44:22 -- nvmf/common.sh@717 -- # local ip 00:24:49.008 02:44:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:49.008 02:44:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:49.008 02:44:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.008 02:44:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.008 02:44:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:49.008 02:44:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.008 02:44:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:49.008 02:44:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:49.008 02:44:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:49.008 02:44:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:49.008 02:44:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.008 02:44:22 -- common/autotest_common.sh@10 -- # set +x 00:24:49.269 nvme0n1 00:24:49.269 02:44:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.269 02:44:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.269 02:44:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:49.269 02:44:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.269 02:44:22 -- common/autotest_common.sh@10 -- # set +x 00:24:49.528 02:44:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.528 02:44:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.528 02:44:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.528 02:44:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.528 02:44:22 -- common/autotest_common.sh@10 -- # set +x 00:24:49.528 02:44:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.528 02:44:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:49.528 02:44:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:49.528 02:44:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:49.528 02:44:22 -- host/auth.sh@44 -- # digest=sha512 00:24:49.528 02:44:22 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:49.528 02:44:22 -- host/auth.sh@44 -- # keyid=2 00:24:49.528 02:44:22 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:49.528 02:44:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:49.528 02:44:22 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:49.528 02:44:22 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:49.528 02:44:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:24:49.528 02:44:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:49.528 02:44:22 -- host/auth.sh@68 -- # digest=sha512 00:24:49.528 02:44:22 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:49.528 02:44:22 -- host/auth.sh@68 -- # keyid=2 00:24:49.528 02:44:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:49.528 02:44:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.528 02:44:22 -- common/autotest_common.sh@10 -- # set +x 00:24:49.528 02:44:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.528 02:44:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:49.528 02:44:22 -- 
nvmf/common.sh@717 -- # local ip 00:24:49.528 02:44:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:49.528 02:44:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:49.528 02:44:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.528 02:44:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.528 02:44:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:49.528 02:44:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.528 02:44:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:49.528 02:44:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:49.528 02:44:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:49.528 02:44:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:49.528 02:44:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.528 02:44:22 -- common/autotest_common.sh@10 -- # set +x 00:24:49.788 nvme0n1 00:24:49.788 02:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.788 02:44:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.788 02:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.788 02:44:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:49.788 02:44:23 -- common/autotest_common.sh@10 -- # set +x 00:24:49.788 02:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.788 02:44:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.788 02:44:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.788 02:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.788 02:44:23 -- common/autotest_common.sh@10 -- # set +x 00:24:49.788 02:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.788 02:44:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:49.788 02:44:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:49.788 02:44:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:49.788 02:44:23 -- host/auth.sh@44 -- # digest=sha512 00:24:49.788 02:44:23 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:49.788 02:44:23 -- host/auth.sh@44 -- # keyid=3 00:24:49.788 02:44:23 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:49.788 02:44:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:49.788 02:44:23 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:49.788 02:44:23 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:49.788 02:44:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:24:49.788 02:44:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:49.788 02:44:23 -- host/auth.sh@68 -- # digest=sha512 00:24:49.788 02:44:23 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:49.788 02:44:23 -- host/auth.sh@68 -- # keyid=3 00:24:49.788 02:44:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:49.788 02:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.788 02:44:23 -- common/autotest_common.sh@10 -- # set +x 00:24:49.788 02:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.788 02:44:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:49.788 02:44:23 -- nvmf/common.sh@717 -- # local ip 00:24:49.788 02:44:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:49.788 02:44:23 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:49.788 02:44:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.788 02:44:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.788 02:44:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:49.788 02:44:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.788 02:44:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:49.788 02:44:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:49.788 02:44:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:49.788 02:44:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:49.788 02:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.788 02:44:23 -- common/autotest_common.sh@10 -- # set +x 00:24:50.359 nvme0n1 00:24:50.359 02:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.359 02:44:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.359 02:44:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:50.359 02:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.359 02:44:23 -- common/autotest_common.sh@10 -- # set +x 00:24:50.359 02:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.359 02:44:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.359 02:44:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.359 02:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.359 02:44:23 -- common/autotest_common.sh@10 -- # set +x 00:24:50.359 02:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.359 02:44:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:50.359 02:44:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:50.359 02:44:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:50.359 02:44:23 -- host/auth.sh@44 -- # digest=sha512 00:24:50.359 02:44:23 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.359 02:44:23 -- host/auth.sh@44 -- # keyid=4 00:24:50.359 02:44:23 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:50.359 02:44:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:50.359 02:44:23 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:50.359 02:44:23 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:50.359 02:44:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:24:50.359 02:44:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:50.359 02:44:23 -- host/auth.sh@68 -- # digest=sha512 00:24:50.359 02:44:23 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:50.359 02:44:23 -- host/auth.sh@68 -- # keyid=4 00:24:50.359 02:44:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:50.359 02:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.359 02:44:23 -- common/autotest_common.sh@10 -- # set +x 00:24:50.359 02:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.359 02:44:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:50.359 02:44:23 -- nvmf/common.sh@717 -- # local ip 00:24:50.359 02:44:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:50.359 02:44:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:50.359 02:44:23 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.359 02:44:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.359 02:44:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:50.359 02:44:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.359 02:44:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:50.359 02:44:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:50.359 02:44:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:50.359 02:44:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:50.359 02:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.359 02:44:23 -- common/autotest_common.sh@10 -- # set +x 00:24:50.930 nvme0n1 00:24:50.930 02:44:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.930 02:44:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.930 02:44:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:50.930 02:44:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.930 02:44:24 -- common/autotest_common.sh@10 -- # set +x 00:24:50.930 02:44:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.930 02:44:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.930 02:44:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.930 02:44:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.930 02:44:24 -- common/autotest_common.sh@10 -- # set +x 00:24:50.930 02:44:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.930 02:44:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:50.930 02:44:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:50.930 02:44:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:50.930 02:44:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:50.930 02:44:24 -- host/auth.sh@44 -- # digest=sha512 00:24:50.930 02:44:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:50.930 02:44:24 -- host/auth.sh@44 -- # keyid=0 00:24:50.930 02:44:24 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:50.930 02:44:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:50.930 02:44:24 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:50.930 02:44:24 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmU5OTNmNmRkMmQxNDU3OGZmNjU5NzNmNzE0MWQ2ZjKSfwI3: 00:24:50.930 02:44:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:24:50.930 02:44:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:50.930 02:44:24 -- host/auth.sh@68 -- # digest=sha512 00:24:50.930 02:44:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:50.930 02:44:24 -- host/auth.sh@68 -- # keyid=0 00:24:50.930 02:44:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:50.930 02:44:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.930 02:44:24 -- common/autotest_common.sh@10 -- # set +x 00:24:50.930 02:44:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.930 02:44:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:50.930 02:44:24 -- nvmf/common.sh@717 -- # local ip 00:24:50.930 02:44:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:50.930 02:44:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:50.930 02:44:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.930 02:44:24 
-- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.930 02:44:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:50.930 02:44:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.930 02:44:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:50.930 02:44:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:50.930 02:44:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:50.930 02:44:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:50.930 02:44:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.930 02:44:24 -- common/autotest_common.sh@10 -- # set +x 00:24:51.503 nvme0n1 00:24:51.503 02:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.503 02:44:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.503 02:44:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:51.503 02:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.503 02:44:25 -- common/autotest_common.sh@10 -- # set +x 00:24:51.503 02:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.503 02:44:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.503 02:44:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.503 02:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.503 02:44:25 -- common/autotest_common.sh@10 -- # set +x 00:24:51.503 02:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.503 02:44:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:51.503 02:44:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:51.503 02:44:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:51.503 02:44:25 -- host/auth.sh@44 -- # digest=sha512 00:24:51.503 02:44:25 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:51.503 02:44:25 -- host/auth.sh@44 -- # keyid=1 00:24:51.503 02:44:25 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:51.503 02:44:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:51.503 02:44:25 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:51.503 02:44:25 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:51.503 02:44:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:24:51.503 02:44:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:51.503 02:44:25 -- host/auth.sh@68 -- # digest=sha512 00:24:51.503 02:44:25 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:51.503 02:44:25 -- host/auth.sh@68 -- # keyid=1 00:24:51.503 02:44:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:51.503 02:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.503 02:44:25 -- common/autotest_common.sh@10 -- # set +x 00:24:51.503 02:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.503 02:44:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:51.503 02:44:25 -- nvmf/common.sh@717 -- # local ip 00:24:51.503 02:44:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:51.503 02:44:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:51.503 02:44:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.503 02:44:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.503 02:44:25 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:24:51.503 02:44:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.503 02:44:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:51.503 02:44:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:51.503 02:44:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:51.503 02:44:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:51.503 02:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.503 02:44:25 -- common/autotest_common.sh@10 -- # set +x 00:24:52.446 nvme0n1 00:24:52.446 02:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.446 02:44:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.446 02:44:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:52.446 02:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.446 02:44:25 -- common/autotest_common.sh@10 -- # set +x 00:24:52.446 02:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.446 02:44:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.446 02:44:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.446 02:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.446 02:44:25 -- common/autotest_common.sh@10 -- # set +x 00:24:52.446 02:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.446 02:44:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:52.446 02:44:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:52.446 02:44:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:52.446 02:44:25 -- host/auth.sh@44 -- # digest=sha512 00:24:52.446 02:44:25 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.446 02:44:25 -- host/auth.sh@44 -- # keyid=2 00:24:52.446 02:44:25 -- host/auth.sh@45 -- # key=DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:52.446 02:44:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:52.446 02:44:25 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:52.446 02:44:25 -- host/auth.sh@49 -- # echo DHHC-1:01:OGJhNWUxMjNiMzgxZmFlMDQ1YzY5YjI3YWMwZGI4NWMVW2X/: 00:24:52.446 02:44:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:24:52.446 02:44:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:52.446 02:44:25 -- host/auth.sh@68 -- # digest=sha512 00:24:52.446 02:44:25 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:52.446 02:44:25 -- host/auth.sh@68 -- # keyid=2 00:24:52.446 02:44:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:52.446 02:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.446 02:44:25 -- common/autotest_common.sh@10 -- # set +x 00:24:52.446 02:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.446 02:44:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:52.446 02:44:25 -- nvmf/common.sh@717 -- # local ip 00:24:52.446 02:44:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:52.446 02:44:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:52.446 02:44:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.446 02:44:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.446 02:44:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:52.446 02:44:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.446 02:44:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:52.446 
02:44:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:52.446 02:44:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:52.446 02:44:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:52.446 02:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.446 02:44:25 -- common/autotest_common.sh@10 -- # set +x 00:24:53.018 nvme0n1 00:24:53.018 02:44:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.018 02:44:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.018 02:44:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:53.018 02:44:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.018 02:44:26 -- common/autotest_common.sh@10 -- # set +x 00:24:53.279 02:44:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.279 02:44:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.279 02:44:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.279 02:44:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.279 02:44:26 -- common/autotest_common.sh@10 -- # set +x 00:24:53.279 02:44:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.279 02:44:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:53.279 02:44:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:53.279 02:44:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:53.279 02:44:26 -- host/auth.sh@44 -- # digest=sha512 00:24:53.279 02:44:26 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:53.279 02:44:26 -- host/auth.sh@44 -- # keyid=3 00:24:53.279 02:44:26 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:53.279 02:44:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:53.279 02:44:26 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:53.279 02:44:26 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTYzNzRlMzdjMTQyZjc4ZTZlYmQ2OTc3MjFhZWYwZTM4NmVmMjI5YjQ4YjI0MmYySNMBZw==: 00:24:53.279 02:44:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:24:53.279 02:44:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:53.279 02:44:26 -- host/auth.sh@68 -- # digest=sha512 00:24:53.279 02:44:26 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:53.279 02:44:26 -- host/auth.sh@68 -- # keyid=3 00:24:53.279 02:44:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:53.279 02:44:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.280 02:44:26 -- common/autotest_common.sh@10 -- # set +x 00:24:53.280 02:44:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.280 02:44:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:53.280 02:44:26 -- nvmf/common.sh@717 -- # local ip 00:24:53.280 02:44:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:53.280 02:44:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:53.280 02:44:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.280 02:44:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.280 02:44:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:53.280 02:44:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.280 02:44:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:53.280 02:44:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:53.280 02:44:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
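The trace above and below repeats one pattern per digest/dhgroup/key combination: program the key and parameters on the target side (nvmet_auth_set_key), restrict the host to the matching DH-HMAC-CHAP digest and DH group, attach with the corresponding --dhchap-key, confirm the controller appears, then detach. A minimal sketch of that loop body, assuming the same environment as this run and using only RPCs, addresses, and NQNs visible in the log; rpc_cmd is the autotest wrapper around scripts/rpc.py, so substitute a direct rpc.py invocation outside this harness:

# assumes the target is listening on 10.0.0.1:4420 and keys key0..key4 were loaded earlier in this run
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expected to print nvme0 when authentication succeeds
rpc_cmd bdev_nvme_detach_controller nvme0

The negative cases later in the log follow the same shape but attach without a key (or with the wrong key) and expect the JSON-RPC "Invalid parameters" error seen below.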
00:24:53.280 02:44:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:53.280 02:44:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.280 02:44:26 -- common/autotest_common.sh@10 -- # set +x 00:24:53.852 nvme0n1 00:24:53.852 02:44:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.852 02:44:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.852 02:44:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:53.852 02:44:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.852 02:44:27 -- common/autotest_common.sh@10 -- # set +x 00:24:53.852 02:44:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.113 02:44:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.113 02:44:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.113 02:44:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.113 02:44:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.113 02:44:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.113 02:44:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:54.113 02:44:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:54.113 02:44:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:54.113 02:44:27 -- host/auth.sh@44 -- # digest=sha512 00:24:54.113 02:44:27 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.113 02:44:27 -- host/auth.sh@44 -- # keyid=4 00:24:54.113 02:44:27 -- host/auth.sh@45 -- # key=DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:54.113 02:44:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:54.113 02:44:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:54.113 02:44:27 -- host/auth.sh@49 -- # echo DHHC-1:03:NWY1NTgwMzU4OWExYTQ2MGU0YTg3ZGFkNjM4OTYxMzhmNjBjYTA0ZjgzMmU0ODg2YTQ0YWJlYmZiMjViNzk1OAhGbrc=: 00:24:54.113 02:44:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:24:54.113 02:44:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:54.113 02:44:27 -- host/auth.sh@68 -- # digest=sha512 00:24:54.113 02:44:27 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:54.113 02:44:27 -- host/auth.sh@68 -- # keyid=4 00:24:54.113 02:44:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:54.113 02:44:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.113 02:44:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.113 02:44:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.113 02:44:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:54.113 02:44:27 -- nvmf/common.sh@717 -- # local ip 00:24:54.113 02:44:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:54.113 02:44:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:54.113 02:44:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.113 02:44:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.113 02:44:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:54.113 02:44:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.113 02:44:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:54.113 02:44:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:54.113 02:44:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:54.113 02:44:27 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:54.113 02:44:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.113 02:44:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.684 nvme0n1 00:24:54.684 02:44:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.684 02:44:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.684 02:44:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.684 02:44:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:54.684 02:44:28 -- common/autotest_common.sh@10 -- # set +x 00:24:54.684 02:44:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.684 02:44:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.684 02:44:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.684 02:44:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.684 02:44:28 -- common/autotest_common.sh@10 -- # set +x 00:24:54.684 02:44:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.684 02:44:28 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:54.684 02:44:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:54.684 02:44:28 -- host/auth.sh@44 -- # digest=sha256 00:24:54.684 02:44:28 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.684 02:44:28 -- host/auth.sh@44 -- # keyid=1 00:24:54.684 02:44:28 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:54.684 02:44:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:54.684 02:44:28 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:54.684 02:44:28 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDdjODc3NTcxNWY1YjgzYjUxMWM5MTlmNzIxMTk4YzEyZGRlMjQ3N2I1NjIwMzdi9N8vVg==: 00:24:54.684 02:44:28 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:54.684 02:44:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.684 02:44:28 -- common/autotest_common.sh@10 -- # set +x 00:24:54.684 02:44:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.684 02:44:28 -- host/auth.sh@119 -- # get_main_ns_ip 00:24:54.684 02:44:28 -- nvmf/common.sh@717 -- # local ip 00:24:54.684 02:44:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:54.684 02:44:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:54.684 02:44:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.684 02:44:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.684 02:44:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:54.684 02:44:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.684 02:44:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:54.684 02:44:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:54.945 02:44:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:54.945 02:44:28 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:54.945 02:44:28 -- common/autotest_common.sh@638 -- # local es=0 00:24:54.945 02:44:28 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:54.945 02:44:28 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:54.945 02:44:28 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:54.945 02:44:28 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:54.945 02:44:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:54.945 02:44:28 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:54.945 02:44:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.945 02:44:28 -- common/autotest_common.sh@10 -- # set +x 00:24:54.945 request: 00:24:54.945 { 00:24:54.945 "name": "nvme0", 00:24:54.945 "trtype": "tcp", 00:24:54.945 "traddr": "10.0.0.1", 00:24:54.946 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:54.946 "adrfam": "ipv4", 00:24:54.946 "trsvcid": "4420", 00:24:54.946 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:54.946 "method": "bdev_nvme_attach_controller", 00:24:54.946 "req_id": 1 00:24:54.946 } 00:24:54.946 Got JSON-RPC error response 00:24:54.946 response: 00:24:54.946 { 00:24:54.946 "code": -32602, 00:24:54.946 "message": "Invalid parameters" 00:24:54.946 } 00:24:54.946 02:44:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:54.946 02:44:28 -- common/autotest_common.sh@641 -- # es=1 00:24:54.946 02:44:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:54.946 02:44:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:54.946 02:44:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:54.946 02:44:28 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.946 02:44:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.946 02:44:28 -- host/auth.sh@121 -- # jq length 00:24:54.946 02:44:28 -- common/autotest_common.sh@10 -- # set +x 00:24:54.946 02:44:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.946 02:44:28 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:24:54.946 02:44:28 -- host/auth.sh@124 -- # get_main_ns_ip 00:24:54.946 02:44:28 -- nvmf/common.sh@717 -- # local ip 00:24:54.946 02:44:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:54.946 02:44:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:54.946 02:44:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.946 02:44:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.946 02:44:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:54.946 02:44:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.946 02:44:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:54.946 02:44:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:54.946 02:44:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:54.946 02:44:28 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:54.946 02:44:28 -- common/autotest_common.sh@638 -- # local es=0 00:24:54.946 02:44:28 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:54.946 02:44:28 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:54.946 02:44:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:54.946 02:44:28 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:54.946 02:44:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:54.946 02:44:28 -- 
common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:54.946 02:44:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.946 02:44:28 -- common/autotest_common.sh@10 -- # set +x 00:24:54.946 request: 00:24:54.946 { 00:24:54.946 "name": "nvme0", 00:24:54.946 "trtype": "tcp", 00:24:54.946 "traddr": "10.0.0.1", 00:24:54.946 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:54.946 "adrfam": "ipv4", 00:24:54.946 "trsvcid": "4420", 00:24:54.946 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:54.946 "dhchap_key": "key2", 00:24:54.946 "method": "bdev_nvme_attach_controller", 00:24:54.946 "req_id": 1 00:24:54.946 } 00:24:54.946 Got JSON-RPC error response 00:24:54.946 response: 00:24:54.946 { 00:24:54.946 "code": -32602, 00:24:54.946 "message": "Invalid parameters" 00:24:54.946 } 00:24:54.946 02:44:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:54.946 02:44:28 -- common/autotest_common.sh@641 -- # es=1 00:24:54.946 02:44:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:54.946 02:44:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:54.946 02:44:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:54.946 02:44:28 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.946 02:44:28 -- host/auth.sh@127 -- # jq length 00:24:54.946 02:44:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.946 02:44:28 -- common/autotest_common.sh@10 -- # set +x 00:24:54.946 02:44:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.946 02:44:28 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:24:54.946 02:44:28 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:54.946 02:44:28 -- host/auth.sh@130 -- # cleanup 00:24:54.946 02:44:28 -- host/auth.sh@24 -- # nvmftestfini 00:24:54.946 02:44:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:54.946 02:44:28 -- nvmf/common.sh@117 -- # sync 00:24:54.946 02:44:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:54.946 02:44:28 -- nvmf/common.sh@120 -- # set +e 00:24:54.946 02:44:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:54.946 02:44:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:54.946 rmmod nvme_tcp 00:24:54.946 rmmod nvme_fabrics 00:24:54.946 02:44:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:54.946 02:44:28 -- nvmf/common.sh@124 -- # set -e 00:24:54.946 02:44:28 -- nvmf/common.sh@125 -- # return 0 00:24:54.946 02:44:28 -- nvmf/common.sh@478 -- # '[' -n 246659 ']' 00:24:54.946 02:44:28 -- nvmf/common.sh@479 -- # killprocess 246659 00:24:54.946 02:44:28 -- common/autotest_common.sh@936 -- # '[' -z 246659 ']' 00:24:54.946 02:44:28 -- common/autotest_common.sh@940 -- # kill -0 246659 00:24:54.946 02:44:28 -- common/autotest_common.sh@941 -- # uname 00:24:54.946 02:44:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:54.946 02:44:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 246659 00:24:55.207 02:44:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:55.207 02:44:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:55.207 02:44:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 246659' 00:24:55.207 killing process with pid 246659 00:24:55.207 02:44:28 -- common/autotest_common.sh@955 -- # kill 246659 00:24:55.207 02:44:28 -- common/autotest_common.sh@960 -- # wait 246659 00:24:55.207 02:44:28 -- nvmf/common.sh@481 -- # 
'[' '' == iso ']' 00:24:55.207 02:44:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:55.207 02:44:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:55.207 02:44:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.207 02:44:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:55.207 02:44:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.207 02:44:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.207 02:44:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.754 02:44:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:57.754 02:44:30 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:57.754 02:44:30 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:57.754 02:44:30 -- host/auth.sh@27 -- # clean_kernel_target 00:24:57.754 02:44:30 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:57.754 02:44:30 -- nvmf/common.sh@675 -- # echo 0 00:24:57.754 02:44:30 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:57.754 02:44:30 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:57.754 02:44:30 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:57.754 02:44:30 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:57.754 02:44:30 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:57.754 02:44:30 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:57.754 02:44:30 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:01.057 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:25:01.057 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:25:01.057 02:44:34 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.oBl /tmp/spdk.key-null.3y7 /tmp/spdk.key-sha256.iod /tmp/spdk.key-sha384.4vZ /tmp/spdk.key-sha512.yLs /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:01.057 02:44:34 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:04.361 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
00:25:04.361 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:25:04.361 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:25:04.361 00:25:04.361 real 0m53.423s 00:25:04.361 user 0m47.362s 00:25:04.361 sys 0m13.569s 00:25:04.361 02:44:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:04.361 02:44:37 -- common/autotest_common.sh@10 -- # set +x 00:25:04.361 ************************************ 00:25:04.361 END TEST nvmf_auth 00:25:04.361 ************************************ 00:25:04.361 02:44:37 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:25:04.361 02:44:37 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:04.361 02:44:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:04.361 02:44:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:04.361 02:44:37 -- common/autotest_common.sh@10 -- # set +x 00:25:04.361 ************************************ 00:25:04.361 START TEST nvmf_digest 00:25:04.361 ************************************ 00:25:04.361 02:44:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:04.361 * Looking for test storage... 
00:25:04.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:04.361 02:44:37 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.361 02:44:37 -- nvmf/common.sh@7 -- # uname -s 00:25:04.361 02:44:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.361 02:44:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.361 02:44:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.361 02:44:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.361 02:44:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.361 02:44:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.361 02:44:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.361 02:44:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.361 02:44:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.361 02:44:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.362 02:44:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:04.362 02:44:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:04.362 02:44:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.362 02:44:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.362 02:44:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.362 02:44:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.362 02:44:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.362 02:44:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.362 02:44:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.362 02:44:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.362 02:44:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.362 02:44:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.362 02:44:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.362 02:44:37 -- paths/export.sh@5 -- # export PATH 00:25:04.362 02:44:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.362 02:44:37 -- nvmf/common.sh@47 -- # : 0 00:25:04.362 02:44:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:04.362 02:44:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:04.362 02:44:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.362 02:44:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.362 02:44:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.362 02:44:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:04.362 02:44:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:04.362 02:44:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:04.362 02:44:37 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:04.362 02:44:37 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:04.362 02:44:37 -- host/digest.sh@16 -- # runtime=2 00:25:04.362 02:44:37 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:04.362 02:44:37 -- host/digest.sh@138 -- # nvmftestinit 00:25:04.362 02:44:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:04.362 02:44:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.362 02:44:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:04.362 02:44:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:04.362 02:44:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:04.362 02:44:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.362 02:44:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:04.362 02:44:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.362 02:44:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:04.362 02:44:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:04.362 02:44:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:04.362 02:44:37 -- common/autotest_common.sh@10 -- # set +x 00:25:10.953 02:44:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:10.953 02:44:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:10.953 02:44:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:10.953 02:44:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:10.953 02:44:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:10.953 02:44:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:10.953 02:44:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:10.953 02:44:44 -- 
nvmf/common.sh@295 -- # net_devs=() 00:25:10.953 02:44:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:10.953 02:44:44 -- nvmf/common.sh@296 -- # e810=() 00:25:10.953 02:44:44 -- nvmf/common.sh@296 -- # local -ga e810 00:25:10.953 02:44:44 -- nvmf/common.sh@297 -- # x722=() 00:25:10.953 02:44:44 -- nvmf/common.sh@297 -- # local -ga x722 00:25:10.953 02:44:44 -- nvmf/common.sh@298 -- # mlx=() 00:25:10.953 02:44:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:10.953 02:44:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.953 02:44:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.953 02:44:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.953 02:44:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.953 02:44:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.953 02:44:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.953 02:44:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.953 02:44:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.953 02:44:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.953 02:44:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.953 02:44:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.953 02:44:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:10.953 02:44:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:10.953 02:44:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:10.953 02:44:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.953 02:44:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:10.953 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:10.953 02:44:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.953 02:44:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:10.953 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:10.953 02:44:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:10.953 02:44:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:10.953 02:44:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.953 02:44:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.953 02:44:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:10.953 02:44:44 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.953 02:44:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:10.953 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:10.953 02:44:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.953 02:44:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.953 02:44:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.953 02:44:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:10.953 02:44:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.954 02:44:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:10.954 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:10.954 02:44:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.954 02:44:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:10.954 02:44:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:10.954 02:44:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:10.954 02:44:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:10.954 02:44:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:10.954 02:44:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.954 02:44:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.954 02:44:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.954 02:44:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:10.954 02:44:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.954 02:44:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.954 02:44:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:10.954 02:44:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.954 02:44:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.954 02:44:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:10.954 02:44:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:10.954 02:44:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.954 02:44:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.954 02:44:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.954 02:44:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.954 02:44:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:10.954 02:44:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.215 02:44:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.215 02:44:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.215 02:44:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:11.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:25:11.215 00:25:11.215 --- 10.0.0.2 ping statistics --- 00:25:11.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.215 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:25:11.215 02:44:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:11.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.404 ms 00:25:11.215 00:25:11.215 --- 10.0.0.1 ping statistics --- 00:25:11.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.215 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:25:11.215 02:44:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.215 02:44:44 -- nvmf/common.sh@411 -- # return 0 00:25:11.215 02:44:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:11.215 02:44:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.215 02:44:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:11.215 02:44:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:11.215 02:44:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.215 02:44:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:11.215 02:44:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:11.215 02:44:44 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:11.215 02:44:44 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:11.215 02:44:44 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:11.215 02:44:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:11.215 02:44:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:11.215 02:44:44 -- common/autotest_common.sh@10 -- # set +x 00:25:11.215 ************************************ 00:25:11.215 START TEST nvmf_digest_clean 00:25:11.215 ************************************ 00:25:11.215 02:44:44 -- common/autotest_common.sh@1111 -- # run_digest 00:25:11.215 02:44:44 -- host/digest.sh@120 -- # local dsa_initiator 00:25:11.215 02:44:44 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:11.215 02:44:44 -- host/digest.sh@121 -- # dsa_initiator=false 00:25:11.215 02:44:44 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:11.216 02:44:44 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:11.216 02:44:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:11.216 02:44:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:11.216 02:44:44 -- common/autotest_common.sh@10 -- # set +x 00:25:11.478 02:44:44 -- nvmf/common.sh@470 -- # nvmfpid=262514 00:25:11.478 02:44:44 -- nvmf/common.sh@471 -- # waitforlisten 262514 00:25:11.478 02:44:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:11.478 02:44:44 -- common/autotest_common.sh@817 -- # '[' -z 262514 ']' 00:25:11.478 02:44:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.478 02:44:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:11.478 02:44:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.478 02:44:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:11.478 02:44:44 -- common/autotest_common.sh@10 -- # set +x 00:25:11.478 [2024-04-27 02:44:44.894900] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:25:11.478 [2024-04-27 02:44:44.894945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.478 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.478 [2024-04-27 02:44:44.959108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.478 [2024-04-27 02:44:45.021207] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.478 [2024-04-27 02:44:45.021243] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.478 [2024-04-27 02:44:45.021250] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.478 [2024-04-27 02:44:45.021257] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.478 [2024-04-27 02:44:45.021263] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.478 [2024-04-27 02:44:45.021293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.129 02:44:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:12.129 02:44:45 -- common/autotest_common.sh@850 -- # return 0 00:25:12.129 02:44:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:12.129 02:44:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:12.129 02:44:45 -- common/autotest_common.sh@10 -- # set +x 00:25:12.129 02:44:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.129 02:44:45 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:12.129 02:44:45 -- host/digest.sh@126 -- # common_target_config 00:25:12.129 02:44:45 -- host/digest.sh@43 -- # rpc_cmd 00:25:12.129 02:44:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.129 02:44:45 -- common/autotest_common.sh@10 -- # set +x 00:25:12.406 null0 00:25:12.406 [2024-04-27 02:44:45.784215] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.406 [2024-04-27 02:44:45.808412] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.406 02:44:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.406 02:44:45 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:12.406 02:44:45 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:12.406 02:44:45 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:12.406 02:44:45 -- host/digest.sh@80 -- # rw=randread 00:25:12.406 02:44:45 -- host/digest.sh@80 -- # bs=4096 00:25:12.406 02:44:45 -- host/digest.sh@80 -- # qd=128 00:25:12.406 02:44:45 -- host/digest.sh@80 -- # scan_dsa=false 00:25:12.406 02:44:45 -- host/digest.sh@83 -- # bperfpid=262861 00:25:12.406 02:44:45 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:12.406 02:44:45 -- host/digest.sh@84 -- # waitforlisten 262861 /var/tmp/bperf.sock 00:25:12.406 02:44:45 -- common/autotest_common.sh@817 -- # '[' -z 262861 ']' 00:25:12.406 02:44:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:12.406 02:44:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:12.406 02:44:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:12.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:12.406 02:44:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:12.406 02:44:45 -- common/autotest_common.sh@10 -- # set +x 00:25:12.406 [2024-04-27 02:44:45.844121] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:25:12.406 [2024-04-27 02:44:45.844169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262861 ] 00:25:12.406 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.406 [2024-04-27 02:44:45.901653] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.406 [2024-04-27 02:44:45.963669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.349 02:44:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:13.349 02:44:46 -- common/autotest_common.sh@850 -- # return 0 00:25:13.349 02:44:46 -- host/digest.sh@86 -- # false 00:25:13.349 02:44:46 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:13.349 02:44:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:13.349 02:44:46 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.349 02:44:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.611 nvme0n1 00:25:13.611 02:44:47 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:13.611 02:44:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:13.611 Running I/O for 2 seconds... 
00:25:16.158 00:25:16.158 Latency(us) 00:25:16.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.158 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:16.158 nvme0n1 : 2.00 20312.02 79.34 0.00 0.00 6293.45 3345.07 20971.52 00:25:16.158 =================================================================================================================== 00:25:16.158 Total : 20312.02 79.34 0.00 0.00 6293.45 3345.07 20971.52 00:25:16.158 0 00:25:16.158 02:44:49 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:16.158 02:44:49 -- host/digest.sh@93 -- # get_accel_stats 00:25:16.158 02:44:49 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:16.158 02:44:49 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:16.158 | select(.opcode=="crc32c") 00:25:16.158 | "\(.module_name) \(.executed)"' 00:25:16.158 02:44:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:16.158 02:44:49 -- host/digest.sh@94 -- # false 00:25:16.158 02:44:49 -- host/digest.sh@94 -- # exp_module=software 00:25:16.158 02:44:49 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:16.158 02:44:49 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:16.158 02:44:49 -- host/digest.sh@98 -- # killprocess 262861 00:25:16.158 02:44:49 -- common/autotest_common.sh@936 -- # '[' -z 262861 ']' 00:25:16.158 02:44:49 -- common/autotest_common.sh@940 -- # kill -0 262861 00:25:16.158 02:44:49 -- common/autotest_common.sh@941 -- # uname 00:25:16.158 02:44:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:16.158 02:44:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 262861 00:25:16.158 02:44:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:16.158 02:44:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:16.159 02:44:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 262861' 00:25:16.159 killing process with pid 262861 00:25:16.159 02:44:49 -- common/autotest_common.sh@955 -- # kill 262861 00:25:16.159 Received shutdown signal, test time was about 2.000000 seconds 00:25:16.159 00:25:16.159 Latency(us) 00:25:16.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.159 =================================================================================================================== 00:25:16.159 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.159 02:44:49 -- common/autotest_common.sh@960 -- # wait 262861 00:25:16.159 02:44:49 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:16.159 02:44:49 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:16.159 02:44:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:16.159 02:44:49 -- host/digest.sh@80 -- # rw=randread 00:25:16.159 02:44:49 -- host/digest.sh@80 -- # bs=131072 00:25:16.159 02:44:49 -- host/digest.sh@80 -- # qd=16 00:25:16.159 02:44:49 -- host/digest.sh@80 -- # scan_dsa=false 00:25:16.159 02:44:49 -- host/digest.sh@83 -- # bperfpid=263548 00:25:16.159 02:44:49 -- host/digest.sh@84 -- # waitforlisten 263548 /var/tmp/bperf.sock 00:25:16.159 02:44:49 -- common/autotest_common.sh@817 -- # '[' -z 263548 ']' 00:25:16.159 02:44:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:16.159 02:44:49 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r 
/var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:16.159 02:44:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:16.159 02:44:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:16.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:16.159 02:44:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:16.159 02:44:49 -- common/autotest_common.sh@10 -- # set +x 00:25:16.159 [2024-04-27 02:44:49.642585] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:25:16.159 [2024-04-27 02:44:49.642642] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263548 ] 00:25:16.159 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:16.159 Zero copy mechanism will not be used. 00:25:16.159 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.159 [2024-04-27 02:44:49.700560] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.159 [2024-04-27 02:44:49.762520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.420 02:44:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:16.420 02:44:49 -- common/autotest_common.sh@850 -- # return 0 00:25:16.420 02:44:49 -- host/digest.sh@86 -- # false 00:25:16.420 02:44:49 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:16.420 02:44:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:16.420 02:44:50 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:16.420 02:44:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:16.991 nvme0n1 00:25:16.991 02:44:50 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:16.991 02:44:50 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:16.991 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:16.991 Zero copy mechanism will not be used. 00:25:16.991 Running I/O for 2 seconds... 
00:25:18.905 00:25:18.905 Latency(us) 00:25:18.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.905 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:18.905 nvme0n1 : 2.01 1842.87 230.36 0.00 0.00 8677.35 4150.61 16711.68 00:25:18.905 =================================================================================================================== 00:25:18.905 Total : 1842.87 230.36 0.00 0.00 8677.35 4150.61 16711.68 00:25:18.905 0 00:25:18.905 02:44:52 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:18.905 02:44:52 -- host/digest.sh@93 -- # get_accel_stats 00:25:18.905 02:44:52 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:18.905 02:44:52 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:18.905 | select(.opcode=="crc32c") 00:25:18.905 | "\(.module_name) \(.executed)"' 00:25:18.905 02:44:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:19.166 02:44:52 -- host/digest.sh@94 -- # false 00:25:19.167 02:44:52 -- host/digest.sh@94 -- # exp_module=software 00:25:19.167 02:44:52 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:19.167 02:44:52 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:19.167 02:44:52 -- host/digest.sh@98 -- # killprocess 263548 00:25:19.167 02:44:52 -- common/autotest_common.sh@936 -- # '[' -z 263548 ']' 00:25:19.167 02:44:52 -- common/autotest_common.sh@940 -- # kill -0 263548 00:25:19.167 02:44:52 -- common/autotest_common.sh@941 -- # uname 00:25:19.167 02:44:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:19.167 02:44:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 263548 00:25:19.167 02:44:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:19.167 02:44:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:19.167 02:44:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 263548' 00:25:19.167 killing process with pid 263548 00:25:19.167 02:44:52 -- common/autotest_common.sh@955 -- # kill 263548 00:25:19.167 Received shutdown signal, test time was about 2.000000 seconds 00:25:19.167 00:25:19.167 Latency(us) 00:25:19.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.167 =================================================================================================================== 00:25:19.167 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.167 02:44:52 -- common/autotest_common.sh@960 -- # wait 263548 00:25:19.167 02:44:52 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:19.167 02:44:52 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:19.167 02:44:52 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:19.167 02:44:52 -- host/digest.sh@80 -- # rw=randwrite 00:25:19.167 02:44:52 -- host/digest.sh@80 -- # bs=4096 00:25:19.167 02:44:52 -- host/digest.sh@80 -- # qd=128 00:25:19.167 02:44:52 -- host/digest.sh@80 -- # scan_dsa=false 00:25:19.167 02:44:52 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:19.167 02:44:52 -- host/digest.sh@83 -- # bperfpid=264222 00:25:19.167 02:44:52 -- host/digest.sh@84 -- # waitforlisten 264222 /var/tmp/bperf.sock 00:25:19.167 02:44:52 -- common/autotest_common.sh@817 -- # '[' -z 264222 ']' 00:25:19.167 02:44:52 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:19.167 02:44:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:19.167 02:44:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:19.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:19.167 02:44:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:19.167 02:44:52 -- common/autotest_common.sh@10 -- # set +x 00:25:19.428 [2024-04-27 02:44:52.795729] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:25:19.428 [2024-04-27 02:44:52.795774] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264222 ] 00:25:19.428 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.428 [2024-04-27 02:44:52.851530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.428 [2024-04-27 02:44:52.913320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.428 02:44:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:19.428 02:44:52 -- common/autotest_common.sh@850 -- # return 0 00:25:19.428 02:44:52 -- host/digest.sh@86 -- # false 00:25:19.428 02:44:52 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:19.428 02:44:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:19.689 02:44:53 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.689 02:44:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.950 nvme0n1 00:25:19.950 02:44:53 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:19.950 02:44:53 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:19.950 Running I/O for 2 seconds... 
00:25:22.497 00:25:22.497 Latency(us) 00:25:22.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.497 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:22.497 nvme0n1 : 2.00 21193.18 82.79 0.00 0.00 6029.96 5051.73 20097.71 00:25:22.497 =================================================================================================================== 00:25:22.497 Total : 21193.18 82.79 0.00 0.00 6029.96 5051.73 20097.71 00:25:22.497 0 00:25:22.497 02:44:55 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:22.497 02:44:55 -- host/digest.sh@93 -- # get_accel_stats 00:25:22.497 02:44:55 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:22.497 02:44:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:22.497 02:44:55 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:22.497 | select(.opcode=="crc32c") 00:25:22.497 | "\(.module_name) \(.executed)"' 00:25:22.497 02:44:55 -- host/digest.sh@94 -- # false 00:25:22.497 02:44:55 -- host/digest.sh@94 -- # exp_module=software 00:25:22.497 02:44:55 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:22.497 02:44:55 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:22.497 02:44:55 -- host/digest.sh@98 -- # killprocess 264222 00:25:22.497 02:44:55 -- common/autotest_common.sh@936 -- # '[' -z 264222 ']' 00:25:22.497 02:44:55 -- common/autotest_common.sh@940 -- # kill -0 264222 00:25:22.497 02:44:55 -- common/autotest_common.sh@941 -- # uname 00:25:22.497 02:44:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:22.497 02:44:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 264222 00:25:22.497 02:44:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:22.497 02:44:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:22.497 02:44:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 264222' 00:25:22.497 killing process with pid 264222 00:25:22.497 02:44:55 -- common/autotest_common.sh@955 -- # kill 264222 00:25:22.497 Received shutdown signal, test time was about 2.000000 seconds 00:25:22.497 00:25:22.497 Latency(us) 00:25:22.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.497 =================================================================================================================== 00:25:22.497 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:22.497 02:44:55 -- common/autotest_common.sh@960 -- # wait 264222 00:25:22.497 02:44:55 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:22.497 02:44:55 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:22.497 02:44:55 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:22.497 02:44:55 -- host/digest.sh@80 -- # rw=randwrite 00:25:22.497 02:44:55 -- host/digest.sh@80 -- # bs=131072 00:25:22.497 02:44:55 -- host/digest.sh@80 -- # qd=16 00:25:22.497 02:44:55 -- host/digest.sh@80 -- # scan_dsa=false 00:25:22.497 02:44:55 -- host/digest.sh@83 -- # bperfpid=264739 00:25:22.497 02:44:55 -- host/digest.sh@84 -- # waitforlisten 264739 /var/tmp/bperf.sock 00:25:22.497 02:44:55 -- common/autotest_common.sh@817 -- # '[' -z 264739 ']' 00:25:22.497 02:44:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:22.497 02:44:55 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r 
/var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:22.497 02:44:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:22.497 02:44:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:22.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:22.497 02:44:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:22.497 02:44:55 -- common/autotest_common.sh@10 -- # set +x 00:25:22.497 [2024-04-27 02:44:55.933538] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:25:22.497 [2024-04-27 02:44:55.933637] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264739 ] 00:25:22.497 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:22.497 Zero copy mechanism will not be used. 00:25:22.497 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.497 [2024-04-27 02:44:55.996355] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.497 [2024-04-27 02:44:56.058529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.459 02:44:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:23.459 02:44:56 -- common/autotest_common.sh@850 -- # return 0 00:25:23.459 02:44:56 -- host/digest.sh@86 -- # false 00:25:23.459 02:44:56 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:23.459 02:44:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:23.459 02:44:56 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:23.459 02:44:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:23.720 nvme0n1 00:25:23.720 02:44:57 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:23.720 02:44:57 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:23.720 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:23.720 Zero copy mechanism will not be used. 00:25:23.720 Running I/O for 2 seconds... 
00:25:26.265 00:25:26.265 Latency(us) 00:25:26.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.265 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:26.265 nvme0n1 : 2.01 2240.03 280.00 0.00 0.00 7127.91 5297.49 26760.53 00:25:26.265 =================================================================================================================== 00:25:26.265 Total : 2240.03 280.00 0.00 0.00 7127.91 5297.49 26760.53 00:25:26.265 0 00:25:26.265 02:44:59 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:26.265 02:44:59 -- host/digest.sh@93 -- # get_accel_stats 00:25:26.265 02:44:59 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:26.265 02:44:59 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:26.265 | select(.opcode=="crc32c") 00:25:26.265 | "\(.module_name) \(.executed)"' 00:25:26.265 02:44:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:26.265 02:44:59 -- host/digest.sh@94 -- # false 00:25:26.265 02:44:59 -- host/digest.sh@94 -- # exp_module=software 00:25:26.265 02:44:59 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:26.265 02:44:59 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:26.266 02:44:59 -- host/digest.sh@98 -- # killprocess 264739 00:25:26.266 02:44:59 -- common/autotest_common.sh@936 -- # '[' -z 264739 ']' 00:25:26.266 02:44:59 -- common/autotest_common.sh@940 -- # kill -0 264739 00:25:26.266 02:44:59 -- common/autotest_common.sh@941 -- # uname 00:25:26.266 02:44:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:26.266 02:44:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 264739 00:25:26.266 02:44:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:26.266 02:44:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:26.266 02:44:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 264739' 00:25:26.266 killing process with pid 264739 00:25:26.266 02:44:59 -- common/autotest_common.sh@955 -- # kill 264739 00:25:26.266 Received shutdown signal, test time was about 2.000000 seconds 00:25:26.266 00:25:26.266 Latency(us) 00:25:26.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.266 =================================================================================================================== 00:25:26.266 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:26.266 02:44:59 -- common/autotest_common.sh@960 -- # wait 264739 00:25:26.266 02:44:59 -- host/digest.sh@132 -- # killprocess 262514 00:25:26.266 02:44:59 -- common/autotest_common.sh@936 -- # '[' -z 262514 ']' 00:25:26.266 02:44:59 -- common/autotest_common.sh@940 -- # kill -0 262514 00:25:26.266 02:44:59 -- common/autotest_common.sh@941 -- # uname 00:25:26.266 02:44:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:26.266 02:44:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 262514 00:25:26.266 02:44:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:26.266 02:44:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:26.266 02:44:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 262514' 00:25:26.266 killing process with pid 262514 00:25:26.266 02:44:59 -- common/autotest_common.sh@955 -- # kill 262514 00:25:26.266 02:44:59 -- common/autotest_common.sh@960 -- # wait 262514 00:25:26.526 00:25:26.527 
real 0m15.059s 00:25:26.527 user 0m29.670s 00:25:26.527 sys 0m2.839s 00:25:26.527 02:44:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:26.527 02:44:59 -- common/autotest_common.sh@10 -- # set +x 00:25:26.527 ************************************ 00:25:26.527 END TEST nvmf_digest_clean 00:25:26.527 ************************************ 00:25:26.527 02:44:59 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:26.527 02:44:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:26.527 02:44:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:26.527 02:44:59 -- common/autotest_common.sh@10 -- # set +x 00:25:26.527 ************************************ 00:25:26.527 START TEST nvmf_digest_error 00:25:26.527 ************************************ 00:25:26.527 02:45:00 -- common/autotest_common.sh@1111 -- # run_digest_error 00:25:26.527 02:45:00 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:26.527 02:45:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:26.527 02:45:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:26.527 02:45:00 -- common/autotest_common.sh@10 -- # set +x 00:25:26.527 02:45:00 -- nvmf/common.sh@470 -- # nvmfpid=265635 00:25:26.527 02:45:00 -- nvmf/common.sh@471 -- # waitforlisten 265635 00:25:26.527 02:45:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:26.527 02:45:00 -- common/autotest_common.sh@817 -- # '[' -z 265635 ']' 00:25:26.527 02:45:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.527 02:45:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:26.527 02:45:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.527 02:45:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:26.527 02:45:00 -- common/autotest_common.sh@10 -- # set +x 00:25:26.527 [2024-04-27 02:45:00.127902] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:25:26.527 [2024-04-27 02:45:00.127953] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.787 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.787 [2024-04-27 02:45:00.192494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.787 [2024-04-27 02:45:00.254347] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.787 [2024-04-27 02:45:00.254384] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.787 [2024-04-27 02:45:00.254392] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.787 [2024-04-27 02:45:00.254398] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.787 [2024-04-27 02:45:00.254404] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:26.787 [2024-04-27 02:45:00.254422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.359 02:45:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:27.359 02:45:00 -- common/autotest_common.sh@850 -- # return 0 00:25:27.359 02:45:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:27.359 02:45:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:27.359 02:45:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.359 02:45:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.359 02:45:00 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:27.359 02:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.359 02:45:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.359 [2024-04-27 02:45:00.924337] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:27.359 02:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.359 02:45:00 -- host/digest.sh@105 -- # common_target_config 00:25:27.359 02:45:00 -- host/digest.sh@43 -- # rpc_cmd 00:25:27.359 02:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.359 02:45:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.621 null0 00:25:27.621 [2024-04-27 02:45:01.005145] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.621 [2024-04-27 02:45:01.029351] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.621 02:45:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.621 02:45:01 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:27.621 02:45:01 -- host/digest.sh@54 -- # local rw bs qd 00:25:27.621 02:45:01 -- host/digest.sh@56 -- # rw=randread 00:25:27.621 02:45:01 -- host/digest.sh@56 -- # bs=4096 00:25:27.621 02:45:01 -- host/digest.sh@56 -- # qd=128 00:25:27.621 02:45:01 -- host/digest.sh@58 -- # bperfpid=265814 00:25:27.621 02:45:01 -- host/digest.sh@60 -- # waitforlisten 265814 /var/tmp/bperf.sock 00:25:27.621 02:45:01 -- common/autotest_common.sh@817 -- # '[' -z 265814 ']' 00:25:27.621 02:45:01 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:27.621 02:45:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:27.621 02:45:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:27.621 02:45:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:27.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:27.621 02:45:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:27.621 02:45:01 -- common/autotest_common.sh@10 -- # set +x 00:25:27.621 [2024-04-27 02:45:01.080975] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:25:27.621 [2024-04-27 02:45:01.081021] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid265814 ] 00:25:27.621 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.621 [2024-04-27 02:45:01.138790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.621 [2024-04-27 02:45:01.201071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.564 02:45:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:28.564 02:45:01 -- common/autotest_common.sh@850 -- # return 0 00:25:28.564 02:45:01 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:28.564 02:45:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:28.564 02:45:01 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:28.564 02:45:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.564 02:45:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.564 02:45:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.564 02:45:01 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:28.564 02:45:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:28.826 nvme0n1 00:25:28.826 02:45:02 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:28.826 02:45:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.826 02:45:02 -- common/autotest_common.sh@10 -- # set +x 00:25:28.826 02:45:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.826 02:45:02 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:28.826 02:45:02 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:28.826 Running I/O for 2 seconds... 
00:25:28.826 [2024-04-27 02:45:02.332944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:28.826 [2024-04-27 02:45:02.332979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.826 [2024-04-27 02:45:02.332990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.826 [2024-04-27 02:45:02.346729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:28.826 [2024-04-27 02:45:02.346752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.826 [2024-04-27 02:45:02.346762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.826 [2024-04-27 02:45:02.359220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:28.826 [2024-04-27 02:45:02.359242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.826 [2024-04-27 02:45:02.359251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.826 [2024-04-27 02:45:02.371702] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:28.826 [2024-04-27 02:45:02.371723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.826 [2024-04-27 02:45:02.371733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.826 [2024-04-27 02:45:02.385231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:28.826 [2024-04-27 02:45:02.385252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.826 [2024-04-27 02:45:02.385261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.826 [2024-04-27 02:45:02.397774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:28.826 [2024-04-27 02:45:02.397794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.826 [2024-04-27 02:45:02.397803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.826 [2024-04-27 02:45:02.410372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:28.826 [2024-04-27 02:45:02.410393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.826 [2024-04-27 02:45:02.410402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.826 [2024-04-27 02:45:02.423264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:28.826 [2024-04-27 02:45:02.423289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.826 [2024-04-27 02:45:02.423298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.826 [2024-04-27 02:45:02.435965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:28.826 [2024-04-27 02:45:02.435986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.826 [2024-04-27 02:45:02.435995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.448425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.448445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.448454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.460080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.460100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.460109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.472524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.472544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.472553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.484790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.484811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.484819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.497995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.498016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.498024] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.510839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.510859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.510873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.524177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.524198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.524207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.536487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.536508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.536516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.549518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.549540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.549549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.561901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.561922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.561931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.574540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.574560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.574569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.587623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.587643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 
02:45:02.587652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.599224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.599245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.599254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.613325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.613345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.613354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.624639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.624665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.624675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.089 [2024-04-27 02:45:02.637111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.089 [2024-04-27 02:45:02.637132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.089 [2024-04-27 02:45:02.637140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.090 [2024-04-27 02:45:02.650008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.090 [2024-04-27 02:45:02.650028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.090 [2024-04-27 02:45:02.650037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.090 [2024-04-27 02:45:02.663641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.090 [2024-04-27 02:45:02.663662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.090 [2024-04-27 02:45:02.663670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.090 [2024-04-27 02:45:02.674780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.090 [2024-04-27 02:45:02.674801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13410 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:29.090 [2024-04-27 02:45:02.674809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.090 [2024-04-27 02:45:02.687486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.090 [2024-04-27 02:45:02.687506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.090 [2024-04-27 02:45:02.687515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.090 [2024-04-27 02:45:02.700236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.090 [2024-04-27 02:45:02.700256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.090 [2024-04-27 02:45:02.700265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.713984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.714004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.714013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.727071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.727092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.727100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.739224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.739244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.739253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.751807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.751828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.751837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.763824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.763844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:23142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.763853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.776716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.776737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.776746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.790405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.790426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.790434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.802889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.802909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.802917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.815990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.816010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.816018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.828121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.828141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.828150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.841334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.841354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.841367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.853699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.853719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.853728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.865194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.865214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.865223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.878056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.878076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.878085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.892322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.892343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.892352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.906172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.906192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.906201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.918738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.918758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.918767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.930367] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.930387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.930395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.943685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 
00:25:29.352 [2024-04-27 02:45:02.943705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.943714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.956596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.956617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.352 [2024-04-27 02:45:02.956625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.352 [2024-04-27 02:45:02.968635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.352 [2024-04-27 02:45:02.968655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.353 [2024-04-27 02:45:02.968664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:02.980446] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:02.980467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:02.980476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:02.995180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:02.995200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:02.995209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.007206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.007227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.007236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.019547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.019567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.019575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.032016] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.032036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.032045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.044603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.044624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.044632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.057342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.057362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.057374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.069835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.069855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.069863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.082103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.082124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.082132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.095080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.095101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.095109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.107500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.107521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.107530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:29.614 [2024-04-27 02:45:03.119765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.119785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.119793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.132791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.132812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.132820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.144468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.144488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.144497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.156487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.156508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.156516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.168917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.168941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.168950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.182038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.182058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.182067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.195399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.195418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.195426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.207849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.207869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.207877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.614 [2024-04-27 02:45:03.220474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.614 [2024-04-27 02:45:03.220494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.614 [2024-04-27 02:45:03.220502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.876 [2024-04-27 02:45:03.233398] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.876 [2024-04-27 02:45:03.233419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.876 [2024-04-27 02:45:03.233428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.876 [2024-04-27 02:45:03.245606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.245625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.245633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.258040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.258059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.258068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.270272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.270297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.270305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.283285] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.283305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.283313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.295728] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.295748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.295756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.307633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.307654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.307662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.320522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.320542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.320551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.333331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.333351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.333360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.346858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.346878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.346887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.359009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.359029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.359037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.371489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.371510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:29.877 [2024-04-27 02:45:03.371518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.384225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.384245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.384257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.396553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.396574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.396582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.407875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.407895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.407904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.422501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.422522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.422530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.434801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.434822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.434831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.447756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.447777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.447785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.460716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.460736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8935 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.460744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.473097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.473117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.473125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.877 [2024-04-27 02:45:03.486091] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:29.877 [2024-04-27 02:45:03.486111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.877 [2024-04-27 02:45:03.486119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.138 [2024-04-27 02:45:03.497499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.138 [2024-04-27 02:45:03.497522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.138 [2024-04-27 02:45:03.497531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.138 [2024-04-27 02:45:03.509826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.138 [2024-04-27 02:45:03.509847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.138 [2024-04-27 02:45:03.509855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.138 [2024-04-27 02:45:03.522564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.138 [2024-04-27 02:45:03.522585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.138 [2024-04-27 02:45:03.522594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.138 [2024-04-27 02:45:03.534667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.138 [2024-04-27 02:45:03.534687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.534696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.548193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.548214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.548222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.560753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.560773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.560781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.573131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.573151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.573160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.585606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.585627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.585636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.598015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.598035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.598047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.610606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.610627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.610636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.622894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.622915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.622924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.635629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 
00:25:30.139 [2024-04-27 02:45:03.635649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.635657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.648028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.648048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.648056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.660695] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.660715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.660723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.672791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.672812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.672822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.685553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.685573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.685581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.697807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.697827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.697836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.710258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.710286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.710295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.722564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.722584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.722592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.735805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.735825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.735834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.139 [2024-04-27 02:45:03.748142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.139 [2024-04-27 02:45:03.748162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.139 [2024-04-27 02:45:03.748171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.760526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.760546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.760554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.772812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.772832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.772840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.785385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.785405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.785413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.797861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.797881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.797890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.810376] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.810397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.810405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.822950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.822971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.822979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.834898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.834918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.834926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.848121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.848142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.848150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.860803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.860823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.860832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.873226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.873246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.873255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.884841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.884861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.884870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:30.400 [2024-04-27 02:45:03.897677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.897698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.897706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.911135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.911155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.911163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.923781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.923801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.923813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.936103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.936124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.936132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.948730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.948750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.948759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.961262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.961287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.961296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.973565] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.973586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.973595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.987231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.987252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.987261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:03.999217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:03.999237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:03.999245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.400 [2024-04-27 02:45:04.011649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.400 [2024-04-27 02:45:04.011669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.400 [2024-04-27 02:45:04.011677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.661 [2024-04-27 02:45:04.024176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.661 [2024-04-27 02:45:04.024196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.661 [2024-04-27 02:45:04.024205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.661 [2024-04-27 02:45:04.036534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.661 [2024-04-27 02:45:04.036559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.661 [2024-04-27 02:45:04.036567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.661 [2024-04-27 02:45:04.048879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.661 [2024-04-27 02:45:04.048900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.661 [2024-04-27 02:45:04.048909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.661 [2024-04-27 02:45:04.061406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.661 [2024-04-27 02:45:04.061426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.661 [2024-04-27 02:45:04.061434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.661 [2024-04-27 02:45:04.073703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.661 [2024-04-27 02:45:04.073723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.661 [2024-04-27 02:45:04.073732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.661 [2024-04-27 02:45:04.086229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.661 [2024-04-27 02:45:04.086250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.661 [2024-04-27 02:45:04.086258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.661 [2024-04-27 02:45:04.098521] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.661 [2024-04-27 02:45:04.098542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.661 [2024-04-27 02:45:04.098551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.661 [2024-04-27 02:45:04.111280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.661 [2024-04-27 02:45:04.111300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.662 [2024-04-27 02:45:04.111309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.662 [2024-04-27 02:45:04.125219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.662 [2024-04-27 02:45:04.125239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.662 [2024-04-27 02:45:04.125247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.662 [2024-04-27 02:45:04.138753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.662 [2024-04-27 02:45:04.138774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.662 [2024-04-27 02:45:04.138782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.662 [2024-04-27 02:45:04.151106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.662 [2024-04-27 02:45:04.151126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:30.662 [2024-04-27 02:45:04.151135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.662 [2024-04-27 02:45:04.163502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.662 [2024-04-27 02:45:04.163523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.662 [2024-04-27 02:45:04.163531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.662 [2024-04-27 02:45:04.175978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.662 [2024-04-27 02:45:04.175998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.662 [2024-04-27 02:45:04.176006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.662 [2024-04-27 02:45:04.188295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.662 [2024-04-27 02:45:04.188315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.662 [2024-04-27 02:45:04.188323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.662 [2024-04-27 02:45:04.200950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.662 [2024-04-27 02:45:04.200970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.662 [2024-04-27 02:45:04.200979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.662 [2024-04-27 02:45:04.213248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.662 [2024-04-27 02:45:04.213268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.662 [2024-04-27 02:45:04.213281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.662 [2024-04-27 02:45:04.225697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.662 [2024-04-27 02:45:04.225719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.662 [2024-04-27 02:45:04.225728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.662 [2024-04-27 02:45:04.238109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.662 [2024-04-27 02:45:04.238131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:17422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.662 [2024-04-27 02:45:04.238141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.662 [2024-04-27 02:45:04.250494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.662 [2024-04-27 02:45:04.250520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.662 [2024-04-27 02:45:04.250530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.662 [2024-04-27 02:45:04.262618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.662 [2024-04-27 02:45:04.262640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.662 [2024-04-27 02:45:04.262650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.662 [2024-04-27 02:45:04.275039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.662 [2024-04-27 02:45:04.275059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.662 [2024-04-27 02:45:04.275067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.923 [2024-04-27 02:45:04.287968] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.923 [2024-04-27 02:45:04.287989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.923 [2024-04-27 02:45:04.287998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.923 [2024-04-27 02:45:04.301143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.923 [2024-04-27 02:45:04.301164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.923 [2024-04-27 02:45:04.301173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.923 [2024-04-27 02:45:04.311439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2130aa0) 00:25:30.923 [2024-04-27 02:45:04.311459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.923 [2024-04-27 02:45:04.311467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.923 00:25:30.923 Latency(us) 00:25:30.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.923 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:30.923 
nvme0n1 : 2.00 20115.20 78.58 0.00 0.00 6354.05 3181.23 19333.12 00:25:30.923 =================================================================================================================== 00:25:30.923 Total : 20115.20 78.58 0.00 0.00 6354.05 3181.23 19333.12 00:25:30.923 0 00:25:30.923 02:45:04 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:30.923 02:45:04 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:30.923 02:45:04 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:30.923 | .driver_specific 00:25:30.923 | .nvme_error 00:25:30.923 | .status_code 00:25:30.923 | .command_transient_transport_error' 00:25:30.923 02:45:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:30.923 02:45:04 -- host/digest.sh@71 -- # (( 158 > 0 )) 00:25:30.923 02:45:04 -- host/digest.sh@73 -- # killprocess 265814 00:25:30.923 02:45:04 -- common/autotest_common.sh@936 -- # '[' -z 265814 ']' 00:25:30.923 02:45:04 -- common/autotest_common.sh@940 -- # kill -0 265814 00:25:30.923 02:45:04 -- common/autotest_common.sh@941 -- # uname 00:25:30.923 02:45:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:30.923 02:45:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 265814 00:25:31.184 02:45:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:31.184 02:45:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:31.184 02:45:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 265814' 00:25:31.184 killing process with pid 265814 00:25:31.184 02:45:04 -- common/autotest_common.sh@955 -- # kill 265814 00:25:31.184 Received shutdown signal, test time was about 2.000000 seconds 00:25:31.185 00:25:31.185 Latency(us) 00:25:31.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.185 =================================================================================================================== 00:25:31.185 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:31.185 02:45:04 -- common/autotest_common.sh@960 -- # wait 265814 00:25:31.185 02:45:04 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:31.185 02:45:04 -- host/digest.sh@54 -- # local rw bs qd 00:25:31.185 02:45:04 -- host/digest.sh@56 -- # rw=randread 00:25:31.185 02:45:04 -- host/digest.sh@56 -- # bs=131072 00:25:31.185 02:45:04 -- host/digest.sh@56 -- # qd=16 00:25:31.185 02:45:04 -- host/digest.sh@58 -- # bperfpid=266621 00:25:31.185 02:45:04 -- host/digest.sh@60 -- # waitforlisten 266621 /var/tmp/bperf.sock 00:25:31.185 02:45:04 -- common/autotest_common.sh@817 -- # '[' -z 266621 ']' 00:25:31.185 02:45:04 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:31.185 02:45:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:31.185 02:45:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:31.185 02:45:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:31.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:31.185 02:45:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:31.185 02:45:04 -- common/autotest_common.sh@10 -- # set +x 00:25:31.185 [2024-04-27 02:45:04.744271] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:25:31.185 [2024-04-27 02:45:04.744330] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid266621 ] 00:25:31.185 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:31.185 Zero copy mechanism will not be used. 00:25:31.185 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.185 [2024-04-27 02:45:04.801784] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.446 [2024-04-27 02:45:04.863523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.018 02:45:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:32.018 02:45:05 -- common/autotest_common.sh@850 -- # return 0 00:25:32.018 02:45:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:32.018 02:45:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:32.280 02:45:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:32.280 02:45:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.280 02:45:05 -- common/autotest_common.sh@10 -- # set +x 00:25:32.280 02:45:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.280 02:45:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:32.280 02:45:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:32.541 nvme0n1 00:25:32.541 02:45:06 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:32.541 02:45:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.541 02:45:06 -- common/autotest_common.sh@10 -- # set +x 00:25:32.541 02:45:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.541 02:45:06 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:32.541 02:45:06 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:32.804 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:32.804 Zero copy mechanism will not be used. 00:25:32.804 Running I/O for 2 seconds... 
00:25:32.804 [2024-04-27 02:45:06.198916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.198953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.198965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.215360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.215386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.215396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.229041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.229065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.229075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.246330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.246353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.246362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.261165] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.261187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.261196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.272776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.272799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.272807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.284110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.284132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.284141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.295339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.295362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.295376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.306754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.306777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.306786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.318545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.318567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.318576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.332404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.332426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.332434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.344120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.344141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.344150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.355807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.355829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.355837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.366831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.366853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.366862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.378914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.378936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.378945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.391138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.391161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.391170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.402866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.402892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.402901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.804 [2024-04-27 02:45:06.414771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:32.804 [2024-04-27 02:45:06.414793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.804 [2024-04-27 02:45:06.414801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.426420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.426442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.426451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.437604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.437625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.437634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.448188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.448209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:33.067 [2024-04-27 02:45:06.448218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.459055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.459076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.459085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.470879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.470900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.470909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.482411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.482433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.482442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.495238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.495260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.495268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.508433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.508454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.508463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.519209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.519231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.519239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.530248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.530270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.530283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.541965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.541987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.541995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.552594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.552616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.552624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.562549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.562570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.562579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.572028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.572049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.572057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.581464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.581485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.581493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.590868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.590889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.590902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.600306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.600328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.600336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.609952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.609973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.609981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.619606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.619628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.619636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.629013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.629035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.629043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.638468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.638489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.638498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.648077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.648098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.648107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.657427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.657449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.657457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.666941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 
00:25:33.067 [2024-04-27 02:45:06.666962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.666970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.067 [2024-04-27 02:45:06.676253] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.067 [2024-04-27 02:45:06.676286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.067 [2024-04-27 02:45:06.676295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.685611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.685633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.685642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.695202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.695224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.695232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.704552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.704574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.704582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.713881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.713902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.713911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.723541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.723562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.723570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.733202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.733223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.733231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.743523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.743544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.743553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.753201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.753223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.753235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.762636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.762657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.762665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.771989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.772010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.772019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.781345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.781367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.781376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.790974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.790995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.791003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.800315] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.800336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.800345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.809636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.809657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.809666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.818960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.818981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.818990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.828501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.828521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.828529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.837838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.837862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.330 [2024-04-27 02:45:06.837870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.330 [2024-04-27 02:45:06.847498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.330 [2024-04-27 02:45:06.847518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.331 [2024-04-27 02:45:06.847526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.331 [2024-04-27 02:45:06.857140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.331 [2024-04-27 02:45:06.857160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.331 [2024-04-27 02:45:06.857169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:25:33.331 [2024-04-27 02:45:06.868299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.331 [2024-04-27 02:45:06.868320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.331 [2024-04-27 02:45:06.868328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.331 [2024-04-27 02:45:06.879052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.331 [2024-04-27 02:45:06.879074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.331 [2024-04-27 02:45:06.879083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.331 [2024-04-27 02:45:06.891897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.331 [2024-04-27 02:45:06.891919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.331 [2024-04-27 02:45:06.891927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.331 [2024-04-27 02:45:06.904082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.331 [2024-04-27 02:45:06.904103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.331 [2024-04-27 02:45:06.904112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.331 [2024-04-27 02:45:06.918044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.331 [2024-04-27 02:45:06.918067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.331 [2024-04-27 02:45:06.918076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.331 [2024-04-27 02:45:06.931690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.331 [2024-04-27 02:45:06.931711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.331 [2024-04-27 02:45:06.931720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.331 [2024-04-27 02:45:06.945117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.331 [2024-04-27 02:45:06.945138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.331 [2024-04-27 02:45:06.945147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:06.961344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:06.961365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:06.961374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:06.974405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:06.974426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:06.974434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:06.986776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:06.986796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:06.986804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:06.997223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:06.997244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:06.997252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:07.006861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:07.006882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:07.006890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:07.016170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:07.016190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:07.016198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:07.025466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:07.025487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:07.025496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:07.034893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:07.034914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:07.034926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:07.044218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:07.044239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:07.044247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:07.053675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:07.053696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:07.053704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:07.063004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:07.063025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:07.063033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:07.072305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:07.072326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:07.072334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:07.081656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:07.081677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:07.081685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:07.091117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:07.091138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:33.593 [2024-04-27 02:45:07.091146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:07.100601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:07.100622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:07.100630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.593 [2024-04-27 02:45:07.109878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.593 [2024-04-27 02:45:07.109898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.593 [2024-04-27 02:45:07.109906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.594 [2024-04-27 02:45:07.119452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.594 [2024-04-27 02:45:07.119476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.594 [2024-04-27 02:45:07.119485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.594 [2024-04-27 02:45:07.129281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.594 [2024-04-27 02:45:07.129303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.594 [2024-04-27 02:45:07.129311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.594 [2024-04-27 02:45:07.138839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.594 [2024-04-27 02:45:07.138860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.594 [2024-04-27 02:45:07.138868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.594 [2024-04-27 02:45:07.148404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.594 [2024-04-27 02:45:07.148425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.594 [2024-04-27 02:45:07.148434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.594 [2024-04-27 02:45:07.157829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.594 [2024-04-27 02:45:07.157850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.594 [2024-04-27 02:45:07.157858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.594 [2024-04-27 02:45:07.167147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.594 [2024-04-27 02:45:07.167167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.594 [2024-04-27 02:45:07.167176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.594 [2024-04-27 02:45:07.176527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.594 [2024-04-27 02:45:07.176548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.594 [2024-04-27 02:45:07.176556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.594 [2024-04-27 02:45:07.185846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.594 [2024-04-27 02:45:07.185867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.594 [2024-04-27 02:45:07.185875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.594 [2024-04-27 02:45:07.195317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.594 [2024-04-27 02:45:07.195338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.594 [2024-04-27 02:45:07.195346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.594 [2024-04-27 02:45:07.204672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.594 [2024-04-27 02:45:07.204692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.594 [2024-04-27 02:45:07.204700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.856 [2024-04-27 02:45:07.214336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.856 [2024-04-27 02:45:07.214357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.856 [2024-04-27 02:45:07.214365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.856 [2024-04-27 02:45:07.223822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.856 [2024-04-27 02:45:07.223843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.856 [2024-04-27 02:45:07.223851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.856 [2024-04-27 02:45:07.233183] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.856 [2024-04-27 02:45:07.233203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.856 [2024-04-27 02:45:07.233212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.856 [2024-04-27 02:45:07.242520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.856 [2024-04-27 02:45:07.242541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.856 [2024-04-27 02:45:07.242549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.856 [2024-04-27 02:45:07.251860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.856 [2024-04-27 02:45:07.251881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.856 [2024-04-27 02:45:07.251889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.856 [2024-04-27 02:45:07.261273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.856 [2024-04-27 02:45:07.261300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.856 [2024-04-27 02:45:07.261308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.856 [2024-04-27 02:45:07.270605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.856 [2024-04-27 02:45:07.270626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.856 [2024-04-27 02:45:07.270634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.856 [2024-04-27 02:45:07.280047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.856 [2024-04-27 02:45:07.280071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.856 [2024-04-27 02:45:07.280079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.856 [2024-04-27 02:45:07.289400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 
00:25:33.856 [2024-04-27 02:45:07.289421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.856 [2024-04-27 02:45:07.289429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.299030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.299051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.299059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.308374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.308394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.308403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.317833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.317853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.317861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.327225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.327245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.327254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.336751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.336771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.336780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.346069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.346090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.346098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.355619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.355640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.355649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.365093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.365113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.365122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.374416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.374437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.374446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.383937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.383958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.383966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.393558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.393579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.393587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.402888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.402909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.402917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.412221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.412242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.412251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.421712] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.421733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.421741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.431224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.431244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.431253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.440546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.440567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.440578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.449946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.449968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.449976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.459343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.459364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.459373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.857 [2024-04-27 02:45:07.468673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:33.857 [2024-04-27 02:45:07.468693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.857 [2024-04-27 02:45:07.468701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.478231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.478252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.478261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:34.120 [2024-04-27 02:45:07.487585] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.487606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.487614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.497060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.497081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.497089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.506452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.506472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.506481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.515840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.515860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.515868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.525178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.525202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.525211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.534439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.534460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.534468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.543749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.543769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.543778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.553074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.553095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.553103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.562522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.562542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.562551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.571962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.571983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.571991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.581309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.581330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.581338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.590626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.590646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.590654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.600133] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.600154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.600162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.609405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.609425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.609433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.618721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.618742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.618750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.628041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.628061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.628069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.637365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.637386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.637394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.646674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.646694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.646703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.655987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.656007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.656016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.665314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.665334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.665343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.674847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.674867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:34.120 [2024-04-27 02:45:07.674875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.684176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.120 [2024-04-27 02:45:07.684197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.120 [2024-04-27 02:45:07.684209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.120 [2024-04-27 02:45:07.693501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.121 [2024-04-27 02:45:07.693521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.121 [2024-04-27 02:45:07.693530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.121 [2024-04-27 02:45:07.702868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.121 [2024-04-27 02:45:07.702888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.121 [2024-04-27 02:45:07.702897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.121 [2024-04-27 02:45:07.712169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.121 [2024-04-27 02:45:07.712189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.121 [2024-04-27 02:45:07.712197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.121 [2024-04-27 02:45:07.721491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.121 [2024-04-27 02:45:07.721512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.121 [2024-04-27 02:45:07.721520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.121 [2024-04-27 02:45:07.731071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.121 [2024-04-27 02:45:07.731091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.121 [2024-04-27 02:45:07.731099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.383 [2024-04-27 02:45:07.740537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.383 [2024-04-27 02:45:07.740559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.383 [2024-04-27 02:45:07.740567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.383 [2024-04-27 02:45:07.750024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.383 [2024-04-27 02:45:07.750045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.383 [2024-04-27 02:45:07.750054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.383 [2024-04-27 02:45:07.759344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.383 [2024-04-27 02:45:07.759365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.383 [2024-04-27 02:45:07.759373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.383 [2024-04-27 02:45:07.768754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.383 [2024-04-27 02:45:07.768775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.383 [2024-04-27 02:45:07.768783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.383 [2024-04-27 02:45:07.778236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.383 [2024-04-27 02:45:07.778257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.383 [2024-04-27 02:45:07.778267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.383 [2024-04-27 02:45:07.787705] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.383 [2024-04-27 02:45:07.787725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.383 [2024-04-27 02:45:07.787734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.383 [2024-04-27 02:45:07.797024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.383 [2024-04-27 02:45:07.797044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.383 [2024-04-27 02:45:07.797052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.383 [2024-04-27 02:45:07.806282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.383 [2024-04-27 02:45:07.806303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.806313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.815564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.815584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.815593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.824956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.824977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.824985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.834666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.834687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.834695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.844086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.844107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.844118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.853372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.853392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.853401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.862858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.862878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.862887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.872184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 
00:25:34.384 [2024-04-27 02:45:07.872205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.872213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.881522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.881543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.881551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.890925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.890945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.890954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.900203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.900224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.900232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.909541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.909562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.909570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.919236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.919257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.919265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.928727] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.928754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.928762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.938050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.938070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.938078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.947418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.947438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.947446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.956787] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.956807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.956816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.966096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.966116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.966124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.975453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.975473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.975483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.984941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.984962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.984970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.384 [2024-04-27 02:45:07.994234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.384 [2024-04-27 02:45:07.994255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.384 [2024-04-27 02:45:07.994263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.646 [2024-04-27 02:45:08.003533] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.646 [2024-04-27 02:45:08.003554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.646 [2024-04-27 02:45:08.003563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.646 [2024-04-27 02:45:08.012848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.012869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.012877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.022312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.022333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.022341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.031795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.031815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.031823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.041354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.041375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.041383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.050676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.050696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.050705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.060139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.060160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.060168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:25:34.647 [2024-04-27 02:45:08.069459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.069480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.069488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.078746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.078766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.078775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.088159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.088179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.088191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.097489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.097509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.097518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.106959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.106980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.106988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.116350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.116371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.116380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.125662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.125683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.125691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.135070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.135091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.135099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.144505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.144526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.144535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.153963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.153983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.153992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.163308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.163328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.163336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.172751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.172776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.172784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.647 [2024-04-27 02:45:08.182255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc0bbf0) 00:25:34.647 [2024-04-27 02:45:08.182282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.647 [2024-04-27 02:45:08.182291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.647 00:25:34.647 Latency(us) 00:25:34.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.647 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:34.647 nvme0n1 : 2.01 3077.16 384.64 0.00 0.00 5195.10 4532.91 18677.76 00:25:34.647 
=================================================================================================================== 00:25:34.647 Total : 3077.16 384.64 0.00 0.00 5195.10 4532.91 18677.76 00:25:34.647 0 00:25:34.647 02:45:08 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:34.647 02:45:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:34.647 02:45:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:34.647 | .driver_specific 00:25:34.647 | .nvme_error 00:25:34.648 | .status_code 00:25:34.648 | .command_transient_transport_error' 00:25:34.648 02:45:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:34.909 02:45:08 -- host/digest.sh@71 -- # (( 199 > 0 )) 00:25:34.909 02:45:08 -- host/digest.sh@73 -- # killprocess 266621 00:25:34.909 02:45:08 -- common/autotest_common.sh@936 -- # '[' -z 266621 ']' 00:25:34.909 02:45:08 -- common/autotest_common.sh@940 -- # kill -0 266621 00:25:34.909 02:45:08 -- common/autotest_common.sh@941 -- # uname 00:25:34.909 02:45:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:34.909 02:45:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 266621 00:25:34.909 02:45:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:34.909 02:45:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:34.909 02:45:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 266621' 00:25:34.909 killing process with pid 266621 00:25:34.909 02:45:08 -- common/autotest_common.sh@955 -- # kill 266621 00:25:34.909 Received shutdown signal, test time was about 2.000000 seconds 00:25:34.909 00:25:34.909 Latency(us) 00:25:34.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.909 =================================================================================================================== 00:25:34.909 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:34.909 02:45:08 -- common/autotest_common.sh@960 -- # wait 266621 00:25:35.170 02:45:08 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:35.170 02:45:08 -- host/digest.sh@54 -- # local rw bs qd 00:25:35.170 02:45:08 -- host/digest.sh@56 -- # rw=randwrite 00:25:35.170 02:45:08 -- host/digest.sh@56 -- # bs=4096 00:25:35.170 02:45:08 -- host/digest.sh@56 -- # qd=128 00:25:35.170 02:45:08 -- host/digest.sh@58 -- # bperfpid=267440 00:25:35.170 02:45:08 -- host/digest.sh@60 -- # waitforlisten 267440 /var/tmp/bperf.sock 00:25:35.170 02:45:08 -- common/autotest_common.sh@817 -- # '[' -z 267440 ']' 00:25:35.170 02:45:08 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:35.170 02:45:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:35.170 02:45:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:35.170 02:45:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:35.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:35.170 02:45:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:35.170 02:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:35.170 [2024-04-27 02:45:08.601241] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
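The (( 199 > 0 )) check above comes from get_transient_errcount: it reads the bdev's NVMe error statistics over the bperf RPC socket with bdev_get_iostat and pulls one counter out of the JSON with the jq filter traced above (written below as a single path expression). A minimal standalone sketch of the same query, assuming the socket path and bdev name used in this run (/var/tmp/bperf.sock, nvme0n1) and an rpc.py path adjusted to a local SPDK tree:

# Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded against a bdev.
# The per-status-code counters are only populated because bdev_nvme_set_options
# is called with --nvme-error-stat in this test flow (visible in the setup trace below).
SPDK_RPC=./scripts/rpc.py      # assumed relative path; the CI run uses the full workspace path
SOCK=/var/tmp/bperf.sock
BDEV=nvme0n1
count=$("$SPDK_RPC" -s "$SOCK" bdev_get_iostat -b "$BDEV" \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
echo "transient transport errors on $BDEV: $count"
(( count > 0 )) || echo "expected at least one injected digest error" >&2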
00:25:35.170 [2024-04-27 02:45:08.601302] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid267440 ] 00:25:35.170 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.170 [2024-04-27 02:45:08.659108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.170 [2024-04-27 02:45:08.721221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.114 02:45:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:36.114 02:45:09 -- common/autotest_common.sh@850 -- # return 0 00:25:36.114 02:45:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:36.114 02:45:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:36.114 02:45:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:36.114 02:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.114 02:45:09 -- common/autotest_common.sh@10 -- # set +x 00:25:36.114 02:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.114 02:45:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.114 02:45:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.375 nvme0n1 00:25:36.375 02:45:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:36.375 02:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.375 02:45:09 -- common/autotest_common.sh@10 -- # set +x 00:25:36.375 02:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.375 02:45:09 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:36.375 02:45:09 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:36.375 Running I/O for 2 seconds... 
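The setup trace above is the write-path variant of the same experiment: bdevperf is launched in wait-for-RPC mode on its own socket, NVMe error statistics and unlimited bdev retries are enabled, CRC32C corruption is injected into the accel layer (accel_error_inject_error -o crc32c -t corrupt -i 256), and the controller is attached with data digest enabled (--ddgst) before perform_tests drives the 2-second randwrite run. Condensed into a plain script, using the sockets, target address and subsystem NQN shown in the trace and rpc.py/bdevperf paths adjusted to a local tree, it looks roughly like this:

# Start bdevperf against a private RPC socket; -z makes it wait for perform_tests.
./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
# (the test waits for /var/tmp/bperf.sock with waitforlisten before issuing RPCs)

# Enable per-status-code NVMe error counters and retry failed I/O indefinitely.
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any previous crc32c error-injection rule. In the trace this goes through
# rpc_cmd, i.e. the default RPC socket rather than the bperf one (presumably the
# nvmf target application in this setup).
./scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest enabled so corrupted CRCs surface
# as the data digest errors logged below.
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt crc32c operations in the accel framework (interval flag -i 256 as in the
# trace), then kick off the 2-second randwrite workload.
./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests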
00:25:36.375 [2024-04-27 02:45:09.973487] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.375 [2024-04-27 02:45:09.973898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.375 [2024-04-27 02:45:09.973929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.375 [2024-04-27 02:45:09.986105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.375 [2024-04-27 02:45:09.986413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.375 [2024-04-27 02:45:09.986434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.637 [2024-04-27 02:45:09.998691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.637 [2024-04-27 02:45:09.998968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.637 [2024-04-27 02:45:09.998990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.637 [2024-04-27 02:45:10.011776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.637 [2024-04-27 02:45:10.012348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.637 [2024-04-27 02:45:10.012369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.637 [2024-04-27 02:45:10.024375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.637 [2024-04-27 02:45:10.024834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.637 [2024-04-27 02:45:10.024853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.637 [2024-04-27 02:45:10.036945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.637 [2024-04-27 02:45:10.037357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.637 [2024-04-27 02:45:10.037377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.637 [2024-04-27 02:45:10.049492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.637 [2024-04-27 02:45:10.049956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.049975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 
m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.062034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.062492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.062511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.074581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.074883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.074902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.087101] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.087420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.087438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.099662] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.100094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.100113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.112250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.112590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.112609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.124753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.125190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.125209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.137432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.137858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.137877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.149948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.150440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.150459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.162471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.162969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.162988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.174983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.175296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.175315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.187522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.187847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.187865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.200008] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.200466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.200484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.212548] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.212852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.212870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.225084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.225400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.225422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.237581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.238003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.238021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.638 [2024-04-27 02:45:10.250111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.638 [2024-04-27 02:45:10.250532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.638 [2024-04-27 02:45:10.250551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.262639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.263112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.263131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.275135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.275546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.275565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.287591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.287999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.288018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.300062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.300548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.300567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.312588] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.313096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.313115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.325040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.325354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.325373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.337523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.337929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.337952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.350033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.350499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.350517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.362551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.363002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.363021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.375069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.375369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.375388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.387540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.387969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.387987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.400101] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.400406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.400425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.412583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.412888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.412906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.425017] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.425388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.425407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.437494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.437899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.437917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.449959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.450415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.450434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.462454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.462872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.462891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.475057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.475457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.475476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.487558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.487978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.487997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.500041] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.500449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.500468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:36.900 [2024-04-27 02:45:10.512490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:36.900 [2024-04-27 02:45:10.512903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.900 [2024-04-27 02:45:10.512922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.162 [2024-04-27 02:45:10.525004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.162 [2024-04-27 02:45:10.525294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.162 [2024-04-27 02:45:10.525313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.162 [2024-04-27 02:45:10.537592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.162 [2024-04-27 02:45:10.538032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.162 [2024-04-27 02:45:10.538051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.162 [2024-04-27 02:45:10.550215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.162 [2024-04-27 02:45:10.550613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.162 [2024-04-27 02:45:10.550632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.562675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.563058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.563077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.575133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.575444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.575463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.587610] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.587938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.587957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.600102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.600599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.600618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.612563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.612983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.613002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.625014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.625490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.625510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.637521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.637854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.637873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.649978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.650440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.650458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.662523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.662956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.662979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.675023] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.675612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.675631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.687540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.687963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.687982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.699994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.700555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.700574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.712493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.713052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.713072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.724956] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.725268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.725291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.737450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.737884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.737903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.749906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.750381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.750400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.762420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.762720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.762740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.163 [2024-04-27 02:45:10.774909] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.163 [2024-04-27 02:45:10.775220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.163 [2024-04-27 02:45:10.775239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.428 [2024-04-27 02:45:10.787398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.428 [2024-04-27 02:45:10.787839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.428 [2024-04-27 02:45:10.787859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.428 [2024-04-27 02:45:10.799868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.428 [2024-04-27 02:45:10.800209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.428 [2024-04-27 02:45:10.800227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.428 [2024-04-27 02:45:10.812334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.428 [2024-04-27 02:45:10.812821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.428 [2024-04-27 02:45:10.812840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.428 [2024-04-27 02:45:10.824812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.428 [2024-04-27 02:45:10.825118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.428 [2024-04-27 02:45:10.825137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.428 [2024-04-27 02:45:10.837255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.428 [2024-04-27 02:45:10.837724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.428 [2024-04-27 02:45:10.837743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.428 [2024-04-27 02:45:10.849807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.428 [2024-04-27 02:45:10.850271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.428 [2024-04-27 02:45:10.850295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.428 [2024-04-27 02:45:10.862260] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:10.862660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:10.862678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:10.874738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:10.875157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:10.875175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:10.887183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:10.887541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:10.887560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:10.899691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:10.900151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:10.900171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:10.912157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:10.912635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:10.912654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:10.924625] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:10.925047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:10.925066] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:10.937131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:10.937570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:10.937589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:10.949608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:10.950023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:10.950042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:10.962081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:10.962365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:10.962384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:10.974559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:10.974960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:10.974978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:10.987014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:10.987429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:10.987448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:10.999620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:11.000100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:11.000119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:11.012086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:11.012380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:11.012399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:11.024556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:11.025033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:11.025052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.429 [2024-04-27 02:45:11.037063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.429 [2024-04-27 02:45:11.037362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.429 [2024-04-27 02:45:11.037381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.049551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.049876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.049895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.062032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.062439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.062458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.074562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.074960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.074979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.086999] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.087371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.087390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.099434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.099740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.099762] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.111941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.112334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.112353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.124410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.124734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.124752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.136865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.137281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.137299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.149447] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.149844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.149862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.161915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.162378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.162397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.174418] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.174838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.174856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.186875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.187343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 
02:45:11.187362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.199380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.199673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.199691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.211862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.212301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.212320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.224332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.224808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.224827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.236913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.237365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.237384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.717 [2024-04-27 02:45:11.249454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.717 [2024-04-27 02:45:11.249789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.717 [2024-04-27 02:45:11.249808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.718 [2024-04-27 02:45:11.261944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.718 [2024-04-27 02:45:11.262378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.718 [2024-04-27 02:45:11.262397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.718 [2024-04-27 02:45:11.274418] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.718 [2024-04-27 02:45:11.274723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.718 
[2024-04-27 02:45:11.274741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.718 [2024-04-27 02:45:11.286929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.718 [2024-04-27 02:45:11.287228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.718 [2024-04-27 02:45:11.287247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.718 [2024-04-27 02:45:11.299394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.718 [2024-04-27 02:45:11.299868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.718 [2024-04-27 02:45:11.299886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.718 [2024-04-27 02:45:11.311868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.718 [2024-04-27 02:45:11.312292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.718 [2024-04-27 02:45:11.312311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.718 [2024-04-27 02:45:11.324409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.718 [2024-04-27 02:45:11.324934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.718 [2024-04-27 02:45:11.324952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.336895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.337433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.337452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.349466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.349749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.349767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.361898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.362329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:37.999 [2024-04-27 02:45:11.362348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.374422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.374879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.374898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.386899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.387195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.387214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.399440] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.399894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.399913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.411914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.412354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.412373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.424396] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.424870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.424892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.436944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.437261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.437284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.449429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.449751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24191 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:37.999 [2024-04-27 02:45:11.449769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.461939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.462364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.462384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.474501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.474900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.474919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.486980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.487267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.487290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.499487] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.499957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.499976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.511905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.512366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.512385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.524390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.524842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.524860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.536890] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.537384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23317 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.537403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.549572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.549871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.549889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.562064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.562383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.562402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.574590] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.574888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.999 [2024-04-27 02:45:11.574908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.999 [2024-04-27 02:45:11.587093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:37.999 [2024-04-27 02:45:11.587418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.000 [2024-04-27 02:45:11.587437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.000 [2024-04-27 02:45:11.599548] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.000 [2024-04-27 02:45:11.600018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.000 [2024-04-27 02:45:11.600037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.000 [2024-04-27 02:45:11.612064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.000 [2024-04-27 02:45:11.612556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.000 [2024-04-27 02:45:11.612575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.624591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.625104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3664 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.625123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.637075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.637577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.637597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.649537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.649854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.649873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.662022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.662457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.662476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.674518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.674943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.674961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.687016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.687444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.687463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.699510] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.699924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.699943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.712082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.712506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:838 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.712526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.724560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.724970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.724989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.737097] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.737593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.737612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.749610] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.750036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.750055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.762064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.762562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.762581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.774566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.775019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.775038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.787071] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.787368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.787388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.799576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.800003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7433 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.800022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.812058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.812355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.812374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.824531] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.824832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.824851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.837010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.837412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.837431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.849512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.849999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.850018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.862050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.862532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.862554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.261 [2024-04-27 02:45:11.874512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.261 [2024-04-27 02:45:11.874986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.261 [2024-04-27 02:45:11.875004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.523 [2024-04-27 02:45:11.887000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.523 [2024-04-27 02:45:11.887420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:8983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.523 [2024-04-27 02:45:11.887439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.523 [2024-04-27 02:45:11.899479] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.523 [2024-04-27 02:45:11.899836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.523 [2024-04-27 02:45:11.899856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.523 [2024-04-27 02:45:11.912007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.523 [2024-04-27 02:45:11.912315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.523 [2024-04-27 02:45:11.912333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.523 [2024-04-27 02:45:11.924472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.523 [2024-04-27 02:45:11.924950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.523 [2024-04-27 02:45:11.924969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.523 [2024-04-27 02:45:11.936969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.523 [2024-04-27 02:45:11.937268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.523 [2024-04-27 02:45:11.937291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.523 [2024-04-27 02:45:11.949441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.523 [2024-04-27 02:45:11.949774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.523 [2024-04-27 02:45:11.949793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.523 [2024-04-27 02:45:11.961877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950660) with pdu=0x2000190fd640 00:25:38.523 [2024-04-27 02:45:11.962299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.523 [2024-04-27 02:45:11.962317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.523 00:25:38.523 Latency(us) 00:25:38.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.523 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:38.523 nvme0n1 : 2.01 20343.25 79.47 0.00 0.00 
6277.78 5406.72 16384.00 00:25:38.523 =================================================================================================================== 00:25:38.523 Total : 20343.25 79.47 0.00 0.00 6277.78 5406.72 16384.00 00:25:38.523 0 00:25:38.523 02:45:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:38.523 02:45:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:38.523 02:45:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:38.523 02:45:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:38.523 | .driver_specific 00:25:38.523 | .nvme_error 00:25:38.523 | .status_code 00:25:38.523 | .command_transient_transport_error' 00:25:38.784 02:45:12 -- host/digest.sh@71 -- # (( 160 > 0 )) 00:25:38.784 02:45:12 -- host/digest.sh@73 -- # killprocess 267440 00:25:38.784 02:45:12 -- common/autotest_common.sh@936 -- # '[' -z 267440 ']' 00:25:38.784 02:45:12 -- common/autotest_common.sh@940 -- # kill -0 267440 00:25:38.784 02:45:12 -- common/autotest_common.sh@941 -- # uname 00:25:38.784 02:45:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:38.784 02:45:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 267440 00:25:38.784 02:45:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:38.784 02:45:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:38.784 02:45:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 267440' 00:25:38.784 killing process with pid 267440 00:25:38.784 02:45:12 -- common/autotest_common.sh@955 -- # kill 267440 00:25:38.784 Received shutdown signal, test time was about 2.000000 seconds 00:25:38.784 00:25:38.784 Latency(us) 00:25:38.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.784 =================================================================================================================== 00:25:38.784 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:38.784 02:45:12 -- common/autotest_common.sh@960 -- # wait 267440 00:25:38.784 02:45:12 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:38.784 02:45:12 -- host/digest.sh@54 -- # local rw bs qd 00:25:38.784 02:45:12 -- host/digest.sh@56 -- # rw=randwrite 00:25:38.784 02:45:12 -- host/digest.sh@56 -- # bs=131072 00:25:38.784 02:45:12 -- host/digest.sh@56 -- # qd=16 00:25:38.784 02:45:12 -- host/digest.sh@58 -- # bperfpid=268590 00:25:38.784 02:45:12 -- host/digest.sh@60 -- # waitforlisten 268590 /var/tmp/bperf.sock 00:25:38.784 02:45:12 -- common/autotest_common.sh@817 -- # '[' -z 268590 ']' 00:25:38.784 02:45:12 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:38.784 02:45:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:38.784 02:45:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:38.784 02:45:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:38.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
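
For reference (not part of host/digest.sh): the get_transient_errcount check above reads the same counter that the jq filter pulls out of bdev_get_iostat. A minimal Python sketch of that lookup, assuming SPDK's scripts/rpc.py at the path shown in the log and the bperf socket still listening:

    import json
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    SOCK = "/var/tmp/bperf.sock"

    def get_transient_errcount(bdev: str = "nvme0n1") -> int:
        # Same RPC and JSON path as the jq filter in the log:
        # .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
        out = subprocess.run([RPC, "-s", SOCK, "bdev_get_iostat", "-b", bdev],
                             check=True, capture_output=True, text=True).stdout
        stats = json.loads(out)
        return stats["bdevs"][0]["driver_specific"]["nvme_error"][
            "status_code"]["command_transient_transport_error"]

    if __name__ == "__main__":
        print(get_transient_errcount())  # the run above reported 160
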
00:25:38.784 02:45:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:38.784 02:45:12 -- common/autotest_common.sh@10 -- # set +x 00:25:38.784 [2024-04-27 02:45:12.381529] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:25:38.784 [2024-04-27 02:45:12.381585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid268590 ] 00:25:38.784 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:38.784 Zero copy mechanism will not be used. 00:25:39.046 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.046 [2024-04-27 02:45:12.439061] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.046 [2024-04-27 02:45:12.501868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.617 02:45:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:39.617 02:45:13 -- common/autotest_common.sh@850 -- # return 0 00:25:39.617 02:45:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:39.617 02:45:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:39.878 02:45:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:39.878 02:45:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:39.878 02:45:13 -- common/autotest_common.sh@10 -- # set +x 00:25:39.878 02:45:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:39.878 02:45:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:39.878 02:45:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:40.139 nvme0n1 00:25:40.139 02:45:13 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:40.139 02:45:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.139 02:45:13 -- common/autotest_common.sh@10 -- # set +x 00:25:40.139 02:45:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.139 02:45:13 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:40.139 02:45:13 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:40.139 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:40.139 Zero copy mechanism will not be used. 00:25:40.139 Running I/O for 2 seconds... 
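
For context on the errors that follow: the controller above is attached with --ddgst, so each data PDU carries a CRC-32C (Castagnoli) data digest, and the accel_error_inject_error -o crc32c -t corrupt -i 32 call deliberately corrupts that calculation, which is why every write below completes with a data digest error and a transient transport error. A slow, pure-Python reference of the checksum involved (illustrative only, not SPDK's accelerated implementation):

    def crc32c(data: bytes, crc: int = 0) -> int:
        # Reflected CRC-32C: init 0xFFFFFFFF, reflected polynomial 0x82F63B78,
        # final XOR 0xFFFFFFFF. Bit-by-bit version, for illustration only.
        crc ^= 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    assert crc32c(b"123456789") == 0xE3069283  # well-known CRC-32C check value
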
00:25:40.139 [2024-04-27 02:45:13.697678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.139 [2024-04-27 02:45:13.698246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-04-27 02:45:13.698285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.139 [2024-04-27 02:45:13.715114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.139 [2024-04-27 02:45:13.715492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-04-27 02:45:13.715516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.139 [2024-04-27 02:45:13.729243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.139 [2024-04-27 02:45:13.729592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-04-27 02:45:13.729613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.139 [2024-04-27 02:45:13.744656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.139 [2024-04-27 02:45:13.744972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-04-27 02:45:13.744993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.760525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.760892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.760913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.776792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.777219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.777240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.791351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.791595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.791615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.805497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.805802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.805823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.821341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.821809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.821830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.836140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.836659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.836680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.851674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.851978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.851998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.868181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.868653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.868674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.884518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.884832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.884853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.899216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.899666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.899691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.913531] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.913884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.913905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.928654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.929076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.929097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.942867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.943265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.943291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.957819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.958240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.400 [2024-04-27 02:45:13.958261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.400 [2024-04-27 02:45:13.973972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.400 [2024-04-27 02:45:13.974378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.401 [2024-04-27 02:45:13.974400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.401 [2024-04-27 02:45:13.989692] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.401 [2024-04-27 02:45:13.990087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.401 [2024-04-27 02:45:13.990108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.401 [2024-04-27 02:45:14.005425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.401 [2024-04-27 02:45:14.005817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.401 [2024-04-27 02:45:14.005837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.661 [2024-04-27 02:45:14.020430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.661 [2024-04-27 02:45:14.020842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.661 [2024-04-27 02:45:14.020862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.661 [2024-04-27 02:45:14.034178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.661 [2024-04-27 02:45:14.034325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.661 [2024-04-27 02:45:14.034344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.661 [2024-04-27 02:45:14.049239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.661 [2024-04-27 02:45:14.049481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.661 [2024-04-27 02:45:14.049501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.661 [2024-04-27 02:45:14.065067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.661 [2024-04-27 02:45:14.065389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.661 [2024-04-27 02:45:14.065409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.661 [2024-04-27 02:45:14.080804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.081214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 [2024-04-27 02:45:14.081232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.662 [2024-04-27 02:45:14.096005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.096326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 [2024-04-27 02:45:14.096346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.662 [2024-04-27 02:45:14.111016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.111323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 
[2024-04-27 02:45:14.111343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.662 [2024-04-27 02:45:14.126390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.126649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 [2024-04-27 02:45:14.126668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.662 [2024-04-27 02:45:14.141365] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.141738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 [2024-04-27 02:45:14.141758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.662 [2024-04-27 02:45:14.156238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.156545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 [2024-04-27 02:45:14.156564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.662 [2024-04-27 02:45:14.170114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.170473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 [2024-04-27 02:45:14.170501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.662 [2024-04-27 02:45:14.183050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.183424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 [2024-04-27 02:45:14.183443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.662 [2024-04-27 02:45:14.198876] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.199183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 [2024-04-27 02:45:14.199203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.662 [2024-04-27 02:45:14.214919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.215365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 [2024-04-27 02:45:14.215385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.662 [2024-04-27 02:45:14.230664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.231153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 [2024-04-27 02:45:14.231173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.662 [2024-04-27 02:45:14.244496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.244823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 [2024-04-27 02:45:14.244843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.662 [2024-04-27 02:45:14.259452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.259873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 [2024-04-27 02:45:14.259893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.662 [2024-04-27 02:45:14.275286] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.662 [2024-04-27 02:45:14.275769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.662 [2024-04-27 02:45:14.275789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.290690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.291012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.291036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.306486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.306960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.306980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.322599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.322942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.322962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.337393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.337738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.337758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.351855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.352203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.352224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.368308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.368655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.368675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.383704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.384134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.384154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.399199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.399702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.399722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.413978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.414440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.414460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.429633] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.429936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.429955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.444031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.444460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.444481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.458350] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.458551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.458571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.473801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.474387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.474407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.489289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.489911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.489931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.504201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.504574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.504595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.517641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 [2024-04-27 02:45:14.518164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.518185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.924 [2024-04-27 02:45:14.531173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:40.924 
[2024-04-27 02:45:14.531656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.924 [2024-04-27 02:45:14.531676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.545331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.545638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.545662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.559429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.559778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.559798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.572623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.573114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.573135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.586943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.587511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.587531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.602209] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.602569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.602590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.617080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.617487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.617507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.631351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.631817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.631837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.645780] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.646310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.646330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.660389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.660837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.660857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.674583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.675147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.675167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.688534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.688892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.688912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.702849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.703427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.703447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.715929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.716394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.716414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.729865] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.730521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.730542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.742477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.742893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.742913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.755973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.756619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.756639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.770446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.770926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.770946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.783160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.783667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.783688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.186 [2024-04-27 02:45:14.797868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.186 [2024-04-27 02:45:14.798589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.186 [2024-04-27 02:45:14.798611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:14.811652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:14.812116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:14.812137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
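
An aside on the "(00/22)" printed in each completion above: spdk_nvme_print_completion shows the status as (status code type / status code) in hex, and 00/22 is the generic-type Command Transient Transport Error this test expects to count. A tiny decoder sketch (illustrative, not from the SPDK tree):

    def decode_status(pair: str) -> str:
        # pair is the "sct/sc" field without parentheses, e.g. "00/22".
        names = {(0x00, 0x22): "GENERIC / COMMAND TRANSIENT TRANSPORT ERROR"}
        sct, sc = (int(x, 16) for x in pair.split("/"))
        return names.get((sct, sc), f"SCT 0x{sct:02x} / SC 0x{sc:02x}")

    print(decode_status("00/22"))  # GENERIC / COMMAND TRANSIENT TRANSPORT ERROR
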
00:25:41.448 [2024-04-27 02:45:14.827678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:14.828070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:14.828089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:14.841362] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:14.841658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:14.841679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:14.856324] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:14.856931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:14.856953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:14.871075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:14.871481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:14.871501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:14.886356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:14.886817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:14.886837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:14.901650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:14.902214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:14.902235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:14.916353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:14.916791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:14.916815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:14.931131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:14.931598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:14.931618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:14.946323] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:14.946833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:14.946853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:14.961511] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:14.961964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:14.961984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:14.976929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:14.977290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:14.977310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:14.992197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:14.992666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:14.992686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:15.007603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:15.008039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:15.008060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:15.021205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:15.021669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:15.021690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:15.035459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:15.035971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:15.035992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:15.049131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:15.049486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:15.049507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.448 [2024-04-27 02:45:15.063690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.448 [2024-04-27 02:45:15.064106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.448 [2024-04-27 02:45:15.064127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.710 [2024-04-27 02:45:15.078515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.710 [2024-04-27 02:45:15.078970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.710 [2024-04-27 02:45:15.078991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.710 [2024-04-27 02:45:15.092940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.710 [2024-04-27 02:45:15.093439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.710 [2024-04-27 02:45:15.093459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.710 [2024-04-27 02:45:15.107366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.710 [2024-04-27 02:45:15.107768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.710 [2024-04-27 02:45:15.107788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.710 [2024-04-27 02:45:15.121960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.710 [2024-04-27 02:45:15.122290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.710 [2024-04-27 02:45:15.122310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.710 [2024-04-27 02:45:15.137403] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.710 [2024-04-27 02:45:15.137852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.710 [2024-04-27 02:45:15.137872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.710 [2024-04-27 02:45:15.151446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.710 [2024-04-27 02:45:15.151927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.710 [2024-04-27 02:45:15.151947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.710 [2024-04-27 02:45:15.166549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.710 [2024-04-27 02:45:15.166977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.710 [2024-04-27 02:45:15.166997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.710 [2024-04-27 02:45:15.180948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.710 [2024-04-27 02:45:15.181366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.710 [2024-04-27 02:45:15.181386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.710 [2024-04-27 02:45:15.195653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.710 [2024-04-27 02:45:15.196148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.710 [2024-04-27 02:45:15.196168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.710 [2024-04-27 02:45:15.209767] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.710 [2024-04-27 02:45:15.210255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.710 [2024-04-27 02:45:15.210282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.710 [2024-04-27 02:45:15.224508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.710 [2024-04-27 02:45:15.224920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.710 
[2024-04-27 02:45:15.224940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.710 [2024-04-27 02:45:15.238979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.710 [2024-04-27 02:45:15.239368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.711 [2024-04-27 02:45:15.239388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.711 [2024-04-27 02:45:15.253533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.711 [2024-04-27 02:45:15.254193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.711 [2024-04-27 02:45:15.254213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.711 [2024-04-27 02:45:15.268087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.711 [2024-04-27 02:45:15.268562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.711 [2024-04-27 02:45:15.268582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.711 [2024-04-27 02:45:15.282516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.711 [2024-04-27 02:45:15.282828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.711 [2024-04-27 02:45:15.282848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.711 [2024-04-27 02:45:15.296697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.711 [2024-04-27 02:45:15.297046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.711 [2024-04-27 02:45:15.297073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.711 [2024-04-27 02:45:15.309870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.711 [2024-04-27 02:45:15.310393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.711 [2024-04-27 02:45:15.310413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.711 [2024-04-27 02:45:15.323367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.711 [2024-04-27 02:45:15.323932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:41.711 [2024-04-27 02:45:15.323953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.972 [2024-04-27 02:45:15.338423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.338776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.338796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.352375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.352957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.352977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.367147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.367712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.367733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.382461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.382979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.382999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.397099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.397490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.397511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.411783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.412082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.412102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.426230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.426587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.426608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.439378] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.439853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.439873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.454516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.454900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.454920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.466535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.466830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.466850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.480793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.481185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.481205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.496220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.496744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.496765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.512765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.513381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.513401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.527357] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.527844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.527865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.542819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.543293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.543316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.558555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.559063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.559084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.573449] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.573951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.573971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.973 [2024-04-27 02:45:15.587432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:41.973 [2024-04-27 02:45:15.587842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.973 [2024-04-27 02:45:15.587862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.234 [2024-04-27 02:45:15.602597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:42.234 [2024-04-27 02:45:15.602974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.234 [2024-04-27 02:45:15.602994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.234 [2024-04-27 02:45:15.616959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:42.234 [2024-04-27 02:45:15.617496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.234 [2024-04-27 02:45:15.617515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.234 [2024-04-27 02:45:15.631812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:42.234 
[2024-04-27 02:45:15.632159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.234 [2024-04-27 02:45:15.632179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.234 [2024-04-27 02:45:15.646415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:42.234 [2024-04-27 02:45:15.646836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.234 [2024-04-27 02:45:15.646856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.234 [2024-04-27 02:45:15.660414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950ad0) with pdu=0x2000190fef90 00:25:42.234 [2024-04-27 02:45:15.660923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.234 [2024-04-27 02:45:15.660943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.234 00:25:42.234 Latency(us) 00:25:42.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.234 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:42.234 nvme0n1 : 2.01 2081.88 260.24 0.00 0.00 7668.78 5543.25 25995.95 00:25:42.234 =================================================================================================================== 00:25:42.234 Total : 2081.88 260.24 0.00 0.00 7668.78 5543.25 25995.95 00:25:42.234 0 00:25:42.234 02:45:15 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:42.234 02:45:15 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:42.234 02:45:15 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:42.234 | .driver_specific 00:25:42.234 | .nvme_error 00:25:42.234 | .status_code 00:25:42.234 | .command_transient_transport_error' 00:25:42.234 02:45:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:42.234 02:45:15 -- host/digest.sh@71 -- # (( 134 > 0 )) 00:25:42.234 02:45:15 -- host/digest.sh@73 -- # killprocess 268590 00:25:42.234 02:45:15 -- common/autotest_common.sh@936 -- # '[' -z 268590 ']' 00:25:42.496 02:45:15 -- common/autotest_common.sh@940 -- # kill -0 268590 00:25:42.496 02:45:15 -- common/autotest_common.sh@941 -- # uname 00:25:42.496 02:45:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:42.496 02:45:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 268590 00:25:42.496 02:45:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:42.496 02:45:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:42.496 02:45:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 268590' 00:25:42.496 killing process with pid 268590 00:25:42.496 02:45:15 -- common/autotest_common.sh@955 -- # kill 268590 00:25:42.496 Received shutdown signal, test time was about 2.000000 seconds 00:25:42.496 00:25:42.496 Latency(us) 00:25:42.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.496 
=================================================================================================================== 00:25:42.496 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.496 02:45:15 -- common/autotest_common.sh@960 -- # wait 268590 00:25:42.496 02:45:16 -- host/digest.sh@116 -- # killprocess 265635 00:25:42.496 02:45:16 -- common/autotest_common.sh@936 -- # '[' -z 265635 ']' 00:25:42.496 02:45:16 -- common/autotest_common.sh@940 -- # kill -0 265635 00:25:42.496 02:45:16 -- common/autotest_common.sh@941 -- # uname 00:25:42.496 02:45:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:42.496 02:45:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 265635 00:25:42.496 02:45:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:42.496 02:45:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:42.496 02:45:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 265635' 00:25:42.496 killing process with pid 265635 00:25:42.496 02:45:16 -- common/autotest_common.sh@955 -- # kill 265635 00:25:42.496 02:45:16 -- common/autotest_common.sh@960 -- # wait 265635 00:25:42.758 00:25:42.758 real 0m16.171s 00:25:42.758 user 0m32.361s 00:25:42.758 sys 0m2.768s 00:25:42.758 02:45:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:42.758 02:45:16 -- common/autotest_common.sh@10 -- # set +x 00:25:42.758 ************************************ 00:25:42.758 END TEST nvmf_digest_error 00:25:42.758 ************************************ 00:25:42.758 02:45:16 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:42.758 02:45:16 -- host/digest.sh@150 -- # nvmftestfini 00:25:42.758 02:45:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:42.758 02:45:16 -- nvmf/common.sh@117 -- # sync 00:25:42.758 02:45:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.758 02:45:16 -- nvmf/common.sh@120 -- # set +e 00:25:42.758 02:45:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.758 02:45:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.758 rmmod nvme_tcp 00:25:42.758 rmmod nvme_fabrics 00:25:42.758 rmmod nvme_keyring 00:25:42.758 02:45:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.758 02:45:16 -- nvmf/common.sh@124 -- # set -e 00:25:42.758 02:45:16 -- nvmf/common.sh@125 -- # return 0 00:25:42.758 02:45:16 -- nvmf/common.sh@478 -- # '[' -n 265635 ']' 00:25:42.758 02:45:16 -- nvmf/common.sh@479 -- # killprocess 265635 00:25:42.758 02:45:16 -- common/autotest_common.sh@936 -- # '[' -z 265635 ']' 00:25:42.758 02:45:16 -- common/autotest_common.sh@940 -- # kill -0 265635 00:25:42.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (265635) - No such process 00:25:42.758 02:45:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 265635 is not found' 00:25:42.758 Process with pid 265635 is not found 00:25:42.758 02:45:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:42.758 02:45:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:42.758 02:45:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:42.758 02:45:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.758 02:45:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:42.758 02:45:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.758 02:45:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.758 02:45:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.308 
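The digest-error run above is doing exactly what its name says: every WRITE in the trace completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and the test passes because the transient-error counter it reads back is non-zero (134). The snippet below is a minimal sketch, not part of the harness, of the two checks visible in the log: the IOPS-to-throughput arithmetic behind the Latency table and the jq extraction host/digest.sh performs against the bdevperf RPC socket.

# Latency table sanity check: 2081.88 IOPS at 128 KiB (131072 B) per IO
awk 'BEGIN { printf "%.1f MiB/s\n", 2081.88 * 131072 / 1048576 }'   # -> 260.2 MiB/s, consistent with the 260.24 column above

# Transient-transport-error count, same jq path as host/digest.sh uses above
errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errs > 0 )) && echo "digest errors observed: $errs"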
02:45:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:45.308 00:25:45.308 real 0m40.839s 00:25:45.308 user 1m4.107s 00:25:45.308 sys 0m11.027s 00:25:45.308 02:45:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:45.308 02:45:18 -- common/autotest_common.sh@10 -- # set +x 00:25:45.308 ************************************ 00:25:45.308 END TEST nvmf_digest 00:25:45.308 ************************************ 00:25:45.308 02:45:18 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:25:45.308 02:45:18 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:25:45.308 02:45:18 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:25:45.308 02:45:18 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:45.308 02:45:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:45.308 02:45:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:45.308 02:45:18 -- common/autotest_common.sh@10 -- # set +x 00:25:45.308 ************************************ 00:25:45.308 START TEST nvmf_bdevperf 00:25:45.308 ************************************ 00:25:45.308 02:45:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:45.308 * Looking for test storage... 00:25:45.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:45.308 02:45:18 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.308 02:45:18 -- nvmf/common.sh@7 -- # uname -s 00:25:45.308 02:45:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.308 02:45:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.308 02:45:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.308 02:45:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.308 02:45:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.308 02:45:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.308 02:45:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.308 02:45:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.308 02:45:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.308 02:45:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.308 02:45:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:45.308 02:45:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:45.308 02:45:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.309 02:45:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.309 02:45:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.309 02:45:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.309 02:45:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.309 02:45:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.309 02:45:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.309 02:45:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.309 02:45:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.309 02:45:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.309 02:45:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.309 02:45:18 -- paths/export.sh@5 -- # export PATH 00:25:45.309 02:45:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.309 02:45:18 -- nvmf/common.sh@47 -- # : 0 00:25:45.309 02:45:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:45.309 02:45:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:45.309 02:45:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.309 02:45:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.309 02:45:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.309 02:45:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:45.309 02:45:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:45.309 02:45:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:45.309 02:45:18 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:45.309 02:45:18 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:45.309 02:45:18 -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:45.309 02:45:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:45.309 02:45:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.309 02:45:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:45.309 02:45:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:45.309 02:45:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:45.309 02:45:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:25:45.309 02:45:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.309 02:45:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.309 02:45:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:45.309 02:45:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:45.309 02:45:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:45.309 02:45:18 -- common/autotest_common.sh@10 -- # set +x 00:25:51.927 02:45:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:51.927 02:45:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:51.927 02:45:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:51.927 02:45:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:51.927 02:45:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:51.927 02:45:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:51.927 02:45:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:51.927 02:45:25 -- nvmf/common.sh@295 -- # net_devs=() 00:25:51.927 02:45:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:51.927 02:45:25 -- nvmf/common.sh@296 -- # e810=() 00:25:51.927 02:45:25 -- nvmf/common.sh@296 -- # local -ga e810 00:25:51.927 02:45:25 -- nvmf/common.sh@297 -- # x722=() 00:25:51.927 02:45:25 -- nvmf/common.sh@297 -- # local -ga x722 00:25:51.927 02:45:25 -- nvmf/common.sh@298 -- # mlx=() 00:25:51.927 02:45:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:51.927 02:45:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.927 02:45:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.927 02:45:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.927 02:45:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.927 02:45:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.927 02:45:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.927 02:45:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.927 02:45:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.927 02:45:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.927 02:45:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.927 02:45:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.927 02:45:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:51.927 02:45:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:51.927 02:45:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:51.927 02:45:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:51.927 02:45:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:51.927 02:45:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:51.927 02:45:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.927 02:45:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:51.927 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:51.927 02:45:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.927 02:45:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.927 02:45:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.927 02:45:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.927 02:45:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.927 02:45:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.927 02:45:25 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:51.927 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:51.927 02:45:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.927 02:45:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.927 02:45:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.927 02:45:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.927 02:45:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.927 02:45:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:51.928 02:45:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:51.928 02:45:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:51.928 02:45:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.928 02:45:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.928 02:45:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:51.928 02:45:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.928 02:45:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:51.928 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:51.928 02:45:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.928 02:45:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.928 02:45:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.928 02:45:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:51.928 02:45:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.928 02:45:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:51.928 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:51.928 02:45:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.928 02:45:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:51.928 02:45:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:51.928 02:45:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:51.928 02:45:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:51.928 02:45:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:51.928 02:45:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.928 02:45:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.928 02:45:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.928 02:45:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:51.928 02:45:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:51.928 02:45:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:51.928 02:45:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:51.928 02:45:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:51.928 02:45:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.928 02:45:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:51.928 02:45:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:51.928 02:45:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:51.928 02:45:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:51.928 02:45:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:51.928 02:45:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:51.928 02:45:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:51.928 02:45:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:25:52.189 02:45:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.189 02:45:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.189 02:45:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:52.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:25:52.189 00:25:52.189 --- 10.0.0.2 ping statistics --- 00:25:52.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.189 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:25:52.189 02:45:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:52.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:25:52.189 00:25:52.189 --- 10.0.0.1 ping statistics --- 00:25:52.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.189 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:25:52.189 02:45:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.189 02:45:25 -- nvmf/common.sh@411 -- # return 0 00:25:52.189 02:45:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:52.189 02:45:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.189 02:45:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:52.189 02:45:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:52.189 02:45:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.189 02:45:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:52.189 02:45:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:52.189 02:45:25 -- host/bdevperf.sh@25 -- # tgt_init 00:25:52.189 02:45:25 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:52.189 02:45:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:52.189 02:45:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:52.189 02:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:52.189 02:45:25 -- nvmf/common.sh@470 -- # nvmfpid=273403 00:25:52.189 02:45:25 -- nvmf/common.sh@471 -- # waitforlisten 273403 00:25:52.189 02:45:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:52.189 02:45:25 -- common/autotest_common.sh@817 -- # '[' -z 273403 ']' 00:25:52.189 02:45:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.189 02:45:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:52.189 02:45:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.189 02:45:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:52.189 02:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:52.189 [2024-04-27 02:45:25.723049] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
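For reference, the loopback topology that nvmf_tcp_init assembled in the trace above reduces to the commands below (a condensed, run-as-root sketch; cvl_0_0 and cvl_0_1 are whatever netdev names this rig's two E810 ports expose under /sys/bus/pci/devices/<bdf>/net, with the target port moved into a private namespace and the initiator port left in the root namespace):

ip netns add cvl_0_0_ns_spdk                              # namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # root ns -> target ns, as verified above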
00:25:52.189 [2024-04-27 02:45:25.723112] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.189 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.189 [2024-04-27 02:45:25.794544] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:52.450 [2024-04-27 02:45:25.867366] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.450 [2024-04-27 02:45:25.867406] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.450 [2024-04-27 02:45:25.867414] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.450 [2024-04-27 02:45:25.867420] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.450 [2024-04-27 02:45:25.867430] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:52.450 [2024-04-27 02:45:25.867706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.450 [2024-04-27 02:45:25.867796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:52.450 [2024-04-27 02:45:25.867844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.020 02:45:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:53.020 02:45:26 -- common/autotest_common.sh@850 -- # return 0 00:25:53.020 02:45:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:53.020 02:45:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:53.020 02:45:26 -- common/autotest_common.sh@10 -- # set +x 00:25:53.020 02:45:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.020 02:45:26 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:53.020 02:45:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.020 02:45:26 -- common/autotest_common.sh@10 -- # set +x 00:25:53.020 [2024-04-27 02:45:26.543843] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.020 02:45:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.020 02:45:26 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:53.020 02:45:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.020 02:45:26 -- common/autotest_common.sh@10 -- # set +x 00:25:53.020 Malloc0 00:25:53.020 02:45:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.020 02:45:26 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:53.020 02:45:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.020 02:45:26 -- common/autotest_common.sh@10 -- # set +x 00:25:53.020 02:45:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.020 02:45:26 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:53.020 02:45:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.020 02:45:26 -- common/autotest_common.sh@10 -- # set +x 00:25:53.020 02:45:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.020 02:45:26 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.020 02:45:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.020 
02:45:26 -- common/autotest_common.sh@10 -- # set +x 00:25:53.020 [2024-04-27 02:45:26.616591] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.020 02:45:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.020 02:45:26 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:53.020 02:45:26 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:53.020 02:45:26 -- nvmf/common.sh@521 -- # config=() 00:25:53.020 02:45:26 -- nvmf/common.sh@521 -- # local subsystem config 00:25:53.020 02:45:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:53.020 02:45:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:53.020 { 00:25:53.020 "params": { 00:25:53.020 "name": "Nvme$subsystem", 00:25:53.020 "trtype": "$TEST_TRANSPORT", 00:25:53.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.020 "adrfam": "ipv4", 00:25:53.020 "trsvcid": "$NVMF_PORT", 00:25:53.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.020 "hdgst": ${hdgst:-false}, 00:25:53.020 "ddgst": ${ddgst:-false} 00:25:53.020 }, 00:25:53.020 "method": "bdev_nvme_attach_controller" 00:25:53.020 } 00:25:53.020 EOF 00:25:53.020 )") 00:25:53.020 02:45:26 -- nvmf/common.sh@543 -- # cat 00:25:53.020 02:45:26 -- nvmf/common.sh@545 -- # jq . 00:25:53.020 02:45:26 -- nvmf/common.sh@546 -- # IFS=, 00:25:53.020 02:45:26 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:53.020 "params": { 00:25:53.020 "name": "Nvme1", 00:25:53.020 "trtype": "tcp", 00:25:53.020 "traddr": "10.0.0.2", 00:25:53.020 "adrfam": "ipv4", 00:25:53.020 "trsvcid": "4420", 00:25:53.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:53.020 "hdgst": false, 00:25:53.020 "ddgst": false 00:25:53.020 }, 00:25:53.020 "method": "bdev_nvme_attach_controller" 00:25:53.020 }' 00:25:53.280 [2024-04-27 02:45:26.668918] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:25:53.280 [2024-04-27 02:45:26.668968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273635 ] 00:25:53.280 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.280 [2024-04-27 02:45:26.727633] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.280 [2024-04-27 02:45:26.790913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.540 Running I/O for 1 seconds... 
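The rpc_cmd calls traced above are what stand up the target that bdevperf attaches to: a TCP transport, a 64 MiB malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. Outside the harness the same sequence could be issued directly with scripts/rpc.py (a sketch; it assumes nvmf_tgt is already running and answering on its default /var/tmp/spdk.sock):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420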
00:25:54.926 
00:25:54.926 Latency(us)
00:25:54.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:54.927 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:54.927 Verification LBA range: start 0x0 length 0x4000
00:25:54.927 Nvme1n1 : 1.01 9344.19 36.50 0.00 0.00 13611.78 1856.85 21408.43
00:25:54.927 ===================================================================================================================
00:25:54.927 Total : 9344.19 36.50 0.00 0.00 13611.78 1856.85 21408.43
00:25:54.927 02:45:28 -- host/bdevperf.sh@30 -- # bdevperfpid=273976
00:25:54.927 02:45:28 -- host/bdevperf.sh@32 -- # sleep 3
00:25:54.927 02:45:28 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:25:54.927 02:45:28 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:25:54.927 02:45:28 -- nvmf/common.sh@521 -- # config=()
00:25:54.927 02:45:28 -- nvmf/common.sh@521 -- # local subsystem config
00:25:54.927 02:45:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:25:54.927 02:45:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:25:54.927 {
00:25:54.927 "params": {
00:25:54.927 "name": "Nvme$subsystem",
00:25:54.927 "trtype": "$TEST_TRANSPORT",
00:25:54.927 "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:54.927 "adrfam": "ipv4",
00:25:54.927 "trsvcid": "$NVMF_PORT",
00:25:54.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:54.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:54.927 "hdgst": ${hdgst:-false},
00:25:54.927 "ddgst": ${ddgst:-false}
00:25:54.927 },
00:25:54.927 "method": "bdev_nvme_attach_controller"
00:25:54.927 }
00:25:54.927 EOF
00:25:54.927 )")
00:25:54.927 02:45:28 -- nvmf/common.sh@543 -- # cat
00:25:54.927 02:45:28 -- nvmf/common.sh@545 -- # jq .
00:25:54.927 02:45:28 -- nvmf/common.sh@546 -- # IFS=,
00:25:54.927 02:45:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:25:54.927 "params": {
00:25:54.927 "name": "Nvme1",
00:25:54.927 "trtype": "tcp",
00:25:54.927 "traddr": "10.0.0.2",
00:25:54.927 "adrfam": "ipv4",
00:25:54.927 "trsvcid": "4420",
00:25:54.927 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:25:54.927 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:25:54.927 "hdgst": false,
00:25:54.927 "ddgst": false
00:25:54.927 },
00:25:54.927 "method": "bdev_nvme_attach_controller"
00:25:54.927 }'
00:25:54.927 [2024-04-27 02:45:28.299545] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization...
00:25:54.927 [2024-04-27 02:45:28.299620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273976 ]
00:25:54.927 EAL: No free 2048 kB hugepages reported on node 1
00:25:54.927 [2024-04-27 02:45:28.359594] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:54.927 [2024-04-27 02:45:28.420101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:55.188 Running I/O for 15 seconds...
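The gen_nvmf_target_json output above is the entire configuration the second bdevperf run receives: one bdev_nvme_attach_controller request aimed at the subsystem created earlier, fed in over --json /dev/fd/63. Reproducing that run by hand might look like the sketch below. The params object is copied verbatim from the trace; the surrounding "subsystems"/"config" wrapper (assembled by the jq step, which the xtrace does not expand) and the use of a temporary file instead of a process-substitution file descriptor are assumptions:

# write the bdevperf JSON config to a temp file instead of /dev/fd/63
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# queue depth 128, 4 KiB verify I/O for 15 seconds; flags mirror host/bdevperf.sh@29
./build/examples/bdevperf --json "$cfg" -q 128 -o 4096 -w verify -t 15 -f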
00:25:57.740 02:45:31 -- host/bdevperf.sh@33 -- # kill -9 273403 00:25:57.740 02:45:31 -- host/bdevperf.sh@35 -- # sleep 3 00:25:57.740 [2024-04-27 02:45:31.258822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.258866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.258888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.258898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.258910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.258928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.258940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.258950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.258960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.258968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.258980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.258988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.258998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259059] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:57.741 [2024-04-27 02:45:31.259452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.741 [2024-04-27 02:45:31.259590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.741 [2024-04-27 02:45:31.259600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259616] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:43840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:76 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.259984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.259994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44016 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.742 [2024-04-27 02:45:31.260246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.742 [2024-04-27 02:45:31.260255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.743 [2024-04-27 02:45:31.260283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260444] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.743 [2024-04-27 02:45:31.260527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260610] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.743 [2024-04-27 02:45:31.260906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.743 [2024-04-27 02:45:31.260915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.744 [2024-04-27 02:45:31.260922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.744 [2024-04-27 02:45:31.260931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.744 [2024-04-27 02:45:31.260938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.744 [2024-04-27 
02:45:31.260947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.744 [2024-04-27 02:45:31.260954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.744 [2024-04-27 02:45:31.260963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.744 [2024-04-27 02:45:31.260971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.744 [2024-04-27 02:45:31.260979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.744 [2024-04-27 02:45:31.260987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.744 [2024-04-27 02:45:31.260996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.744 [2024-04-27 02:45:31.261003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.744 [2024-04-27 02:45:31.261012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.744 [2024-04-27 02:45:31.261019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.744 [2024-04-27 02:45:31.261028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.744 [2024-04-27 02:45:31.261035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.744 [2024-04-27 02:45:31.261044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc880 is same with the state(5) to be set 00:25:57.744 [2024-04-27 02:45:31.261053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:57.744 [2024-04-27 02:45:31.261059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:57.744 [2024-04-27 02:45:31.261066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44472 len:8 PRP1 0x0 PRP2 0x0 00:25:57.744 [2024-04-27 02:45:31.261075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.744 [2024-04-27 02:45:31.261113] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18fc880 was disconnected and freed. reset controller. 
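Everything above, from the kill -9 issued at host/bdevperf.sh@33 onward, is the host reacting to the target disappearing mid-run: with the nvmf target (pid 273403) gone, every command still queued on the I/O qpair completes as ABORTED - SQ DELETION and qpair 0x18fc880 is freed so the controller can be reset. The entries that follow are the repeated reconnect attempts, each failing with connect() errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. The fault-injection step itself reduces to the sketch below, with NVMF_TGT_PID standing in for whatever pid variable the harness tracks (273403 in this run):

# hard-kill the target while bdevperf still has verify I/O in flight (host/bdevperf.sh@33)
kill -9 "$NVMF_TGT_PID"
# give bdevperf time to notice; it aborts queued I/O and starts retrying the connection (@35)
sleep 3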
00:25:57.744 [2024-04-27 02:45:31.261159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.744 [2024-04-27 02:45:31.261169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.744 [2024-04-27 02:45:31.261177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.744 [2024-04-27 02:45:31.261184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.744 [2024-04-27 02:45:31.261192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.744 [2024-04-27 02:45:31.261199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.744 [2024-04-27 02:45:31.261207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.744 [2024-04-27 02:45:31.261214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.744 [2024-04-27 02:45:31.261221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:57.744 [2024-04-27 02:45:31.264818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.744 [2024-04-27 02:45:31.264841] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:57.744 [2024-04-27 02:45:31.265737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.744 [2024-04-27 02:45:31.266263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.744 [2024-04-27 02:45:31.266275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:57.744 [2024-04-27 02:45:31.266295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:57.744 [2024-04-27 02:45:31.266533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:57.744 [2024-04-27 02:45:31.266754] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.744 [2024-04-27 02:45:31.266762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.744 [2024-04-27 02:45:31.266771] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.744 [2024-04-27 02:45:31.270302] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.744 [2024-04-27 02:45:31.278821] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.744 [2024-04-27 02:45:31.279657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.744 [2024-04-27 02:45:31.280138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.744 [2024-04-27 02:45:31.280150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:57.744 [2024-04-27 02:45:31.280159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:57.744 [2024-04-27 02:45:31.280403] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:57.744 [2024-04-27 02:45:31.280624] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.744 [2024-04-27 02:45:31.280637] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.744 [2024-04-27 02:45:31.280645] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.744 [2024-04-27 02:45:31.284172] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.744 [2024-04-27 02:45:31.292699] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.744 [2024-04-27 02:45:31.293267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.744 [2024-04-27 02:45:31.293872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.744 [2024-04-27 02:45:31.293909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:57.744 [2024-04-27 02:45:31.293919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:57.744 [2024-04-27 02:45:31.294156] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:57.744 [2024-04-27 02:45:31.294386] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.744 [2024-04-27 02:45:31.294396] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.744 [2024-04-27 02:45:31.294405] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.744 [2024-04-27 02:45:31.297935] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.744 [2024-04-27 02:45:31.306684] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.744 [2024-04-27 02:45:31.307413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.744 [2024-04-27 02:45:31.307911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.744 [2024-04-27 02:45:31.307924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:57.744 [2024-04-27 02:45:31.307933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:57.744 [2024-04-27 02:45:31.308170] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:57.744 [2024-04-27 02:45:31.308398] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.744 [2024-04-27 02:45:31.308406] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.744 [2024-04-27 02:45:31.308414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.744 [2024-04-27 02:45:31.311937] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.744 [2024-04-27 02:45:31.320457] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.744 [2024-04-27 02:45:31.321149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.744 [2024-04-27 02:45:31.321699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.744 [2024-04-27 02:45:31.321736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:57.744 [2024-04-27 02:45:31.321747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:57.744 [2024-04-27 02:45:31.321983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:57.744 [2024-04-27 02:45:31.322204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.744 [2024-04-27 02:45:31.322212] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.744 [2024-04-27 02:45:31.322225] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.744 [2024-04-27 02:45:31.325761] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.744 [2024-04-27 02:45:31.334290] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.744 [2024-04-27 02:45:31.334945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.744 [2024-04-27 02:45:31.335433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.744 [2024-04-27 02:45:31.335456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:57.744 [2024-04-27 02:45:31.335466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:57.744 [2024-04-27 02:45:31.335703] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:57.744 [2024-04-27 02:45:31.335923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.744 [2024-04-27 02:45:31.335931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.744 [2024-04-27 02:45:31.335938] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.745 [2024-04-27 02:45:31.339477] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.745 [2024-04-27 02:45:31.348213] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.745 [2024-04-27 02:45:31.349039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.745 [2024-04-27 02:45:31.349523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.745 [2024-04-27 02:45:31.349537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:57.745 [2024-04-27 02:45:31.349547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:57.745 [2024-04-27 02:45:31.349784] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:57.745 [2024-04-27 02:45:31.350004] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.745 [2024-04-27 02:45:31.350013] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:57.745 [2024-04-27 02:45:31.350020] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.745 [2024-04-27 02:45:31.353550] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.007 [2024-04-27 02:45:31.362062] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.007 [2024-04-27 02:45:31.362737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.007 [2024-04-27 02:45:31.363198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.007 [2024-04-27 02:45:31.363208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.007 [2024-04-27 02:45:31.363215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.008 [2024-04-27 02:45:31.363439] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.008 [2024-04-27 02:45:31.363657] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.008 [2024-04-27 02:45:31.363665] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.008 [2024-04-27 02:45:31.363672] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.008 [2024-04-27 02:45:31.367196] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.008 [2024-04-27 02:45:31.375918] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.008 [2024-04-27 02:45:31.376694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.377174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.377186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.008 [2024-04-27 02:45:31.377195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.008 [2024-04-27 02:45:31.377439] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.008 [2024-04-27 02:45:31.377660] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.008 [2024-04-27 02:45:31.377668] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.008 [2024-04-27 02:45:31.377675] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.008 [2024-04-27 02:45:31.381196] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.008 [2024-04-27 02:45:31.389709] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.008 [2024-04-27 02:45:31.390524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.391013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.391025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.008 [2024-04-27 02:45:31.391035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.008 [2024-04-27 02:45:31.391271] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.008 [2024-04-27 02:45:31.391499] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.008 [2024-04-27 02:45:31.391508] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.008 [2024-04-27 02:45:31.391515] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.008 [2024-04-27 02:45:31.395035] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.008 [2024-04-27 02:45:31.403549] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.008 [2024-04-27 02:45:31.404142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.404700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.404737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.008 [2024-04-27 02:45:31.404747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.008 [2024-04-27 02:45:31.404984] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.008 [2024-04-27 02:45:31.405204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.008 [2024-04-27 02:45:31.405212] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.008 [2024-04-27 02:45:31.405220] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.008 [2024-04-27 02:45:31.408759] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.008 [2024-04-27 02:45:31.417486] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.008 [2024-04-27 02:45:31.418297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.418697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.418709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.008 [2024-04-27 02:45:31.418719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.008 [2024-04-27 02:45:31.418955] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.008 [2024-04-27 02:45:31.419175] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.008 [2024-04-27 02:45:31.419184] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.008 [2024-04-27 02:45:31.419191] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.008 [2024-04-27 02:45:31.422721] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.008 [2024-04-27 02:45:31.431436] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.008 [2024-04-27 02:45:31.432222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.432715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.432728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.008 [2024-04-27 02:45:31.432737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.008 [2024-04-27 02:45:31.432974] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.008 [2024-04-27 02:45:31.433194] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.008 [2024-04-27 02:45:31.433203] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.008 [2024-04-27 02:45:31.433210] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.008 [2024-04-27 02:45:31.436737] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.008 [2024-04-27 02:45:31.445256] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.008 [2024-04-27 02:45:31.446078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.446599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.446614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.008 [2024-04-27 02:45:31.446623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.008 [2024-04-27 02:45:31.446859] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.008 [2024-04-27 02:45:31.447080] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.008 [2024-04-27 02:45:31.447088] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.008 [2024-04-27 02:45:31.447095] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.008 [2024-04-27 02:45:31.450629] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.008 [2024-04-27 02:45:31.459326] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.008 [2024-04-27 02:45:31.460183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.460671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.460685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.008 [2024-04-27 02:45:31.460694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.008 [2024-04-27 02:45:31.460931] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.008 [2024-04-27 02:45:31.461151] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.008 [2024-04-27 02:45:31.461159] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.008 [2024-04-27 02:45:31.461166] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.008 [2024-04-27 02:45:31.464695] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.008 [2024-04-27 02:45:31.473204] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.008 [2024-04-27 02:45:31.473984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.474466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.474480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.008 [2024-04-27 02:45:31.474490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.008 [2024-04-27 02:45:31.474726] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.008 [2024-04-27 02:45:31.474947] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.008 [2024-04-27 02:45:31.474955] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.008 [2024-04-27 02:45:31.474962] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.008 [2024-04-27 02:45:31.478491] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.008 [2024-04-27 02:45:31.487015] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.008 [2024-04-27 02:45:31.487773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.488253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.008 [2024-04-27 02:45:31.488266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.008 [2024-04-27 02:45:31.488275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.008 [2024-04-27 02:45:31.488518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.008 [2024-04-27 02:45:31.488739] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.008 [2024-04-27 02:45:31.488747] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.008 [2024-04-27 02:45:31.488754] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.009 [2024-04-27 02:45:31.492299] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.009 [2024-04-27 02:45:31.500814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.009 [2024-04-27 02:45:31.501623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.502109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.502126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.009 [2024-04-27 02:45:31.502136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.009 [2024-04-27 02:45:31.502380] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.009 [2024-04-27 02:45:31.502602] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.009 [2024-04-27 02:45:31.502610] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.009 [2024-04-27 02:45:31.502617] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.009 [2024-04-27 02:45:31.506147] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.009 [2024-04-27 02:45:31.514663] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.009 [2024-04-27 02:45:31.515377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.515789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.515802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.009 [2024-04-27 02:45:31.515811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.009 [2024-04-27 02:45:31.516049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.009 [2024-04-27 02:45:31.516269] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.009 [2024-04-27 02:45:31.516285] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.009 [2024-04-27 02:45:31.516293] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.009 [2024-04-27 02:45:31.519817] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.009 [2024-04-27 02:45:31.528539] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.009 [2024-04-27 02:45:31.529224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.529693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.529704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.009 [2024-04-27 02:45:31.529712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.009 [2024-04-27 02:45:31.529930] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.009 [2024-04-27 02:45:31.530147] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.009 [2024-04-27 02:45:31.530155] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.009 [2024-04-27 02:45:31.530161] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.009 [2024-04-27 02:45:31.533683] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.009 [2024-04-27 02:45:31.542403] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.009 [2024-04-27 02:45:31.542883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.543263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.543274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.009 [2024-04-27 02:45:31.543292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.009 [2024-04-27 02:45:31.543513] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.009 [2024-04-27 02:45:31.543732] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.009 [2024-04-27 02:45:31.543739] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.009 [2024-04-27 02:45:31.543745] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.009 [2024-04-27 02:45:31.547478] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.009 [2024-04-27 02:45:31.556207] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.009 [2024-04-27 02:45:31.556971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.557539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.557576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.009 [2024-04-27 02:45:31.557586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.009 [2024-04-27 02:45:31.557823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.009 [2024-04-27 02:45:31.558044] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.009 [2024-04-27 02:45:31.558052] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.009 [2024-04-27 02:45:31.558059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.009 [2024-04-27 02:45:31.561585] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.009 [2024-04-27 02:45:31.570094] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.009 [2024-04-27 02:45:31.570922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.571494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.571531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.009 [2024-04-27 02:45:31.571541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.009 [2024-04-27 02:45:31.571778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.009 [2024-04-27 02:45:31.571999] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.009 [2024-04-27 02:45:31.572007] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.009 [2024-04-27 02:45:31.572014] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.009 [2024-04-27 02:45:31.575547] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.009 [2024-04-27 02:45:31.584070] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.009 [2024-04-27 02:45:31.584842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.585180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.585193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.009 [2024-04-27 02:45:31.585202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.009 [2024-04-27 02:45:31.585451] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.009 [2024-04-27 02:45:31.585673] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.009 [2024-04-27 02:45:31.585681] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.009 [2024-04-27 02:45:31.585688] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.009 [2024-04-27 02:45:31.589210] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.009 [2024-04-27 02:45:31.597924] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.009 [2024-04-27 02:45:31.598585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.599070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.599083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.009 [2024-04-27 02:45:31.599092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.009 [2024-04-27 02:45:31.599337] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.009 [2024-04-27 02:45:31.599558] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.009 [2024-04-27 02:45:31.599567] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.009 [2024-04-27 02:45:31.599574] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.009 [2024-04-27 02:45:31.603094] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.009 [2024-04-27 02:45:31.611821] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.009 [2024-04-27 02:45:31.612597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.613101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.009 [2024-04-27 02:45:31.613114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.009 [2024-04-27 02:45:31.613123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.009 [2024-04-27 02:45:31.613367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.009 [2024-04-27 02:45:31.613588] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.009 [2024-04-27 02:45:31.613596] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.009 [2024-04-27 02:45:31.613603] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.009 [2024-04-27 02:45:31.617129] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.272 [2024-04-27 02:45:31.625650] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.272 [2024-04-27 02:45:31.626432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.272 [2024-04-27 02:45:31.626829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.272 [2024-04-27 02:45:31.626842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.272 [2024-04-27 02:45:31.626852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.272 [2024-04-27 02:45:31.627089] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.272 [2024-04-27 02:45:31.627322] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.272 [2024-04-27 02:45:31.627332] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.272 [2024-04-27 02:45:31.627339] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.272 [2024-04-27 02:45:31.630861] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.273 [2024-04-27 02:45:31.639580] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.273 [2024-04-27 02:45:31.640267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.640735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.640745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.273 [2024-04-27 02:45:31.640752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.273 [2024-04-27 02:45:31.640971] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.273 [2024-04-27 02:45:31.641188] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.273 [2024-04-27 02:45:31.641195] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.273 [2024-04-27 02:45:31.641202] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.273 [2024-04-27 02:45:31.644724] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.273 [2024-04-27 02:45:31.653444] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.273 [2024-04-27 02:45:31.654207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.654705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.654719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.273 [2024-04-27 02:45:31.654728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.273 [2024-04-27 02:45:31.654964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.273 [2024-04-27 02:45:31.655185] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.273 [2024-04-27 02:45:31.655193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.273 [2024-04-27 02:45:31.655201] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.273 [2024-04-27 02:45:31.658731] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.273 [2024-04-27 02:45:31.667272] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.273 [2024-04-27 02:45:31.668022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.668510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.668524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.273 [2024-04-27 02:45:31.668533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.273 [2024-04-27 02:45:31.668769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.273 [2024-04-27 02:45:31.668990] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.273 [2024-04-27 02:45:31.669002] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.273 [2024-04-27 02:45:31.669009] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.273 [2024-04-27 02:45:31.672538] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.273 [2024-04-27 02:45:31.681056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.273 [2024-04-27 02:45:31.681834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.682322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.682336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.273 [2024-04-27 02:45:31.682346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.273 [2024-04-27 02:45:31.682582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.273 [2024-04-27 02:45:31.682803] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.273 [2024-04-27 02:45:31.682811] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.273 [2024-04-27 02:45:31.682818] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.273 [2024-04-27 02:45:31.686344] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.273 [2024-04-27 02:45:31.694862] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.273 [2024-04-27 02:45:31.695551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.695881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.695891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.273 [2024-04-27 02:45:31.695899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.273 [2024-04-27 02:45:31.696118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.273 [2024-04-27 02:45:31.696341] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.273 [2024-04-27 02:45:31.696350] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.273 [2024-04-27 02:45:31.696356] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.273 [2024-04-27 02:45:31.699882] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.273 [2024-04-27 02:45:31.708840] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.273 [2024-04-27 02:45:31.709570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.710049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.710062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.273 [2024-04-27 02:45:31.710072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.273 [2024-04-27 02:45:31.710317] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.273 [2024-04-27 02:45:31.710538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.273 [2024-04-27 02:45:31.710546] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.273 [2024-04-27 02:45:31.710558] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.273 [2024-04-27 02:45:31.714082] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.273 [2024-04-27 02:45:31.722623] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.273 [2024-04-27 02:45:31.723360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.723845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.723858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.273 [2024-04-27 02:45:31.723867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.273 [2024-04-27 02:45:31.724104] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.273 [2024-04-27 02:45:31.724331] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.273 [2024-04-27 02:45:31.724340] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.273 [2024-04-27 02:45:31.724347] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.273 [2024-04-27 02:45:31.727875] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.273 [2024-04-27 02:45:31.736397] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.273 [2024-04-27 02:45:31.737213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.737690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.737704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.273 [2024-04-27 02:45:31.737713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.273 [2024-04-27 02:45:31.737950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.273 [2024-04-27 02:45:31.738170] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.273 [2024-04-27 02:45:31.738178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.273 [2024-04-27 02:45:31.738185] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.273 [2024-04-27 02:45:31.741719] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.273 [2024-04-27 02:45:31.750245] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.273 [2024-04-27 02:45:31.751073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.751562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.751576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.273 [2024-04-27 02:45:31.751585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.273 [2024-04-27 02:45:31.751822] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.273 [2024-04-27 02:45:31.752042] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.273 [2024-04-27 02:45:31.752050] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.273 [2024-04-27 02:45:31.752057] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.273 [2024-04-27 02:45:31.755597] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.273 [2024-04-27 02:45:31.764126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.273 [2024-04-27 02:45:31.764792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.765293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.273 [2024-04-27 02:45:31.765307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.273 [2024-04-27 02:45:31.765317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.274 [2024-04-27 02:45:31.765553] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.274 [2024-04-27 02:45:31.765773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.274 [2024-04-27 02:45:31.765782] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.274 [2024-04-27 02:45:31.765789] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.274 [2024-04-27 02:45:31.769323] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.274 [2024-04-27 02:45:31.778060] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.274 [2024-04-27 02:45:31.778858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.779494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.779530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.274 [2024-04-27 02:45:31.779541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.274 [2024-04-27 02:45:31.779778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.274 [2024-04-27 02:45:31.779999] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.274 [2024-04-27 02:45:31.780007] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.274 [2024-04-27 02:45:31.780014] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.274 [2024-04-27 02:45:31.783542] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.274 [2024-04-27 02:45:31.791854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.274 [2024-04-27 02:45:31.792428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.792910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.792924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.274 [2024-04-27 02:45:31.792933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.274 [2024-04-27 02:45:31.793170] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.274 [2024-04-27 02:45:31.793399] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.274 [2024-04-27 02:45:31.793407] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.274 [2024-04-27 02:45:31.793414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.274 [2024-04-27 02:45:31.796936] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.274 [2024-04-27 02:45:31.805670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.274 [2024-04-27 02:45:31.806537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.806889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.806903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.274 [2024-04-27 02:45:31.806913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.274 [2024-04-27 02:45:31.807149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.274 [2024-04-27 02:45:31.807377] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.274 [2024-04-27 02:45:31.807386] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.274 [2024-04-27 02:45:31.807393] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.274 [2024-04-27 02:45:31.810918] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.274 [2024-04-27 02:45:31.819437] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.274 [2024-04-27 02:45:31.820190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.820707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.820721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.274 [2024-04-27 02:45:31.820731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.274 [2024-04-27 02:45:31.820967] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.274 [2024-04-27 02:45:31.821187] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.274 [2024-04-27 02:45:31.821195] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.274 [2024-04-27 02:45:31.821203] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.274 [2024-04-27 02:45:31.824732] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.274 [2024-04-27 02:45:31.833243] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.274 [2024-04-27 02:45:31.834059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.834437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.834451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.274 [2024-04-27 02:45:31.834460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.274 [2024-04-27 02:45:31.834696] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.274 [2024-04-27 02:45:31.834917] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.274 [2024-04-27 02:45:31.834925] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.274 [2024-04-27 02:45:31.834932] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.274 [2024-04-27 02:45:31.838459] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.274 [2024-04-27 02:45:31.847178] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.274 [2024-04-27 02:45:31.847961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.848442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.848456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.274 [2024-04-27 02:45:31.848465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.274 [2024-04-27 02:45:31.848702] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.274 [2024-04-27 02:45:31.848922] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.274 [2024-04-27 02:45:31.848930] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.274 [2024-04-27 02:45:31.848937] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.274 [2024-04-27 02:45:31.852466] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.274 [2024-04-27 02:45:31.860991] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.274 [2024-04-27 02:45:31.861489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.861852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.861862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.274 [2024-04-27 02:45:31.861869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.274 [2024-04-27 02:45:31.862088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.274 [2024-04-27 02:45:31.862308] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.274 [2024-04-27 02:45:31.862316] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.274 [2024-04-27 02:45:31.862323] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.274 [2024-04-27 02:45:31.865842] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.274 [2024-04-27 02:45:31.874781] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.274 [2024-04-27 02:45:31.875552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.876036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.876049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.274 [2024-04-27 02:45:31.876058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.274 [2024-04-27 02:45:31.876301] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.274 [2024-04-27 02:45:31.876522] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.274 [2024-04-27 02:45:31.876530] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.274 [2024-04-27 02:45:31.876537] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.274 [2024-04-27 02:45:31.880061] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.274 [2024-04-27 02:45:31.888577] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.274 [2024-04-27 02:45:31.889348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.889836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.274 [2024-04-27 02:45:31.889854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.274 [2024-04-27 02:45:31.889863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.274 [2024-04-27 02:45:31.890099] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.274 [2024-04-27 02:45:31.890329] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.274 [2024-04-27 02:45:31.890338] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.275 [2024-04-27 02:45:31.890345] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.537 [2024-04-27 02:45:31.893868] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.537 [2024-04-27 02:45:31.902383] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.537 [2024-04-27 02:45:31.903171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.537 [2024-04-27 02:45:31.903566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.537 [2024-04-27 02:45:31.903580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.537 [2024-04-27 02:45:31.903590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.537 [2024-04-27 02:45:31.903826] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.537 [2024-04-27 02:45:31.904046] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.537 [2024-04-27 02:45:31.904054] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.537 [2024-04-27 02:45:31.904061] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.537 [2024-04-27 02:45:31.907600] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.537 [2024-04-27 02:45:31.916322] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.537 [2024-04-27 02:45:31.917135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.537 [2024-04-27 02:45:31.917613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.537 [2024-04-27 02:45:31.917627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.537 [2024-04-27 02:45:31.917637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.537 [2024-04-27 02:45:31.917874] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.537 [2024-04-27 02:45:31.918094] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.537 [2024-04-27 02:45:31.918102] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.537 [2024-04-27 02:45:31.918109] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.537 [2024-04-27 02:45:31.921639] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.537 [2024-04-27 02:45:31.930153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.537 [2024-04-27 02:45:31.930903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.537 [2024-04-27 02:45:31.931385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.537 [2024-04-27 02:45:31.931398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.537 [2024-04-27 02:45:31.931412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.537 [2024-04-27 02:45:31.931648] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.537 [2024-04-27 02:45:31.931869] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.537 [2024-04-27 02:45:31.931877] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.537 [2024-04-27 02:45:31.931884] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.537 [2024-04-27 02:45:31.935411] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.537 [2024-04-27 02:45:31.943919] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.537 [2024-04-27 02:45:31.944730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.537 [2024-04-27 02:45:31.945213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.537 [2024-04-27 02:45:31.945226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.537 [2024-04-27 02:45:31.945235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.537 [2024-04-27 02:45:31.945480] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.537 [2024-04-27 02:45:31.945701] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.537 [2024-04-27 02:45:31.945709] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.537 [2024-04-27 02:45:31.945716] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.537 [2024-04-27 02:45:31.949239] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.537 [2024-04-27 02:45:31.957756] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.537 [2024-04-27 02:45:31.958578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.537 [2024-04-27 02:45:31.959058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:31.959070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.538 [2024-04-27 02:45:31.959080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.538 [2024-04-27 02:45:31.959324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.538 [2024-04-27 02:45:31.959545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.538 [2024-04-27 02:45:31.959553] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.538 [2024-04-27 02:45:31.959560] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.538 [2024-04-27 02:45:31.963081] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.538 [2024-04-27 02:45:31.971634] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.538 [2024-04-27 02:45:31.972480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:31.972837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:31.972850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.538 [2024-04-27 02:45:31.972859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.538 [2024-04-27 02:45:31.973100] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.538 [2024-04-27 02:45:31.973328] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.538 [2024-04-27 02:45:31.973337] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.538 [2024-04-27 02:45:31.973345] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.538 [2024-04-27 02:45:31.976870] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.538 [2024-04-27 02:45:31.985591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.538 [2024-04-27 02:45:31.986407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:31.986896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:31.986909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.538 [2024-04-27 02:45:31.986918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.538 [2024-04-27 02:45:31.987154] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.538 [2024-04-27 02:45:31.987383] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.538 [2024-04-27 02:45:31.987392] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.538 [2024-04-27 02:45:31.987399] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.538 [2024-04-27 02:45:31.990922] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.538 [2024-04-27 02:45:31.999441] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.538 [2024-04-27 02:45:32.000220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.000694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.000707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.538 [2024-04-27 02:45:32.000717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.538 [2024-04-27 02:45:32.000953] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.538 [2024-04-27 02:45:32.001174] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.538 [2024-04-27 02:45:32.001182] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.538 [2024-04-27 02:45:32.001189] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.538 [2024-04-27 02:45:32.004717] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.538 [2024-04-27 02:45:32.013238] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.538 [2024-04-27 02:45:32.013842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.014322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.014338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.538 [2024-04-27 02:45:32.014348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.538 [2024-04-27 02:45:32.014584] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.538 [2024-04-27 02:45:32.014809] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.538 [2024-04-27 02:45:32.014818] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.538 [2024-04-27 02:45:32.014826] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.538 [2024-04-27 02:45:32.018353] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.538 [2024-04-27 02:45:32.027072] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.538 [2024-04-27 02:45:32.027860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.028390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.028405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.538 [2024-04-27 02:45:32.028414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.538 [2024-04-27 02:45:32.028651] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.538 [2024-04-27 02:45:32.028872] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.538 [2024-04-27 02:45:32.028880] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.538 [2024-04-27 02:45:32.028888] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.538 [2024-04-27 02:45:32.032415] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.538 [2024-04-27 02:45:32.040927] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.538 [2024-04-27 02:45:32.041723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.042206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.042219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.538 [2024-04-27 02:45:32.042228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.538 [2024-04-27 02:45:32.042471] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.538 [2024-04-27 02:45:32.042692] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.538 [2024-04-27 02:45:32.042701] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.538 [2024-04-27 02:45:32.042708] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.538 [2024-04-27 02:45:32.046230] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.538 [2024-04-27 02:45:32.054744] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.538 [2024-04-27 02:45:32.055498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.055972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.055984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.538 [2024-04-27 02:45:32.055993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.538 [2024-04-27 02:45:32.056230] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.538 [2024-04-27 02:45:32.056458] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.538 [2024-04-27 02:45:32.056474] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.538 [2024-04-27 02:45:32.056482] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.538 [2024-04-27 02:45:32.060006] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.538 [2024-04-27 02:45:32.068532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.538 [2024-04-27 02:45:32.069218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.069778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.069814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.538 [2024-04-27 02:45:32.069825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.538 [2024-04-27 02:45:32.070061] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.538 [2024-04-27 02:45:32.070288] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.538 [2024-04-27 02:45:32.070297] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.538 [2024-04-27 02:45:32.070305] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.538 [2024-04-27 02:45:32.073828] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.538 [2024-04-27 02:45:32.082345] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.538 [2024-04-27 02:45:32.083038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.083580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.538 [2024-04-27 02:45:32.083617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.538 [2024-04-27 02:45:32.083627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.538 [2024-04-27 02:45:32.083864] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.538 [2024-04-27 02:45:32.084085] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.539 [2024-04-27 02:45:32.084093] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.539 [2024-04-27 02:45:32.084100] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.539 [2024-04-27 02:45:32.087632] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.539 [2024-04-27 02:45:32.096147] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.539 [2024-04-27 02:45:32.096880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.539 [2024-04-27 02:45:32.097237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.539 [2024-04-27 02:45:32.097247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.539 [2024-04-27 02:45:32.097255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.539 [2024-04-27 02:45:32.097478] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.539 [2024-04-27 02:45:32.097696] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.539 [2024-04-27 02:45:32.097703] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.539 [2024-04-27 02:45:32.097715] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.539 [2024-04-27 02:45:32.101239] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.539 [2024-04-27 02:45:32.109964] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.539 [2024-04-27 02:45:32.110719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.539 [2024-04-27 02:45:32.111201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.539 [2024-04-27 02:45:32.111214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.539 [2024-04-27 02:45:32.111223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.539 [2024-04-27 02:45:32.111468] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.539 [2024-04-27 02:45:32.111689] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.539 [2024-04-27 02:45:32.111697] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.539 [2024-04-27 02:45:32.111705] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.539 [2024-04-27 02:45:32.115229] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.539 [2024-04-27 02:45:32.123740] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.539 [2024-04-27 02:45:32.124552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.539 [2024-04-27 02:45:32.125036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.539 [2024-04-27 02:45:32.125048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.539 [2024-04-27 02:45:32.125057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.539 [2024-04-27 02:45:32.125302] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.539 [2024-04-27 02:45:32.125523] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.539 [2024-04-27 02:45:32.125531] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.539 [2024-04-27 02:45:32.125538] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.539 [2024-04-27 02:45:32.129061] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.539 [2024-04-27 02:45:32.137575] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.539 [2024-04-27 02:45:32.138347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.539 [2024-04-27 02:45:32.138815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.539 [2024-04-27 02:45:32.138828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.539 [2024-04-27 02:45:32.138837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.539 [2024-04-27 02:45:32.139073] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.539 [2024-04-27 02:45:32.139301] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.539 [2024-04-27 02:45:32.139310] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.539 [2024-04-27 02:45:32.139317] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.539 [2024-04-27 02:45:32.142846] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.539 [2024-04-27 02:45:32.151362] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.539 [2024-04-27 02:45:32.152155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.539 [2024-04-27 02:45:32.152667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.539 [2024-04-27 02:45:32.152682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.539 [2024-04-27 02:45:32.152691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.539 [2024-04-27 02:45:32.152928] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.539 [2024-04-27 02:45:32.153148] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.539 [2024-04-27 02:45:32.153157] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.539 [2024-04-27 02:45:32.153164] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.802 [2024-04-27 02:45:32.156692] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.802 [2024-04-27 02:45:32.165201] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.802 [2024-04-27 02:45:32.166034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.166397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.166410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.802 [2024-04-27 02:45:32.166420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.802 [2024-04-27 02:45:32.166656] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.802 [2024-04-27 02:45:32.166876] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.802 [2024-04-27 02:45:32.166885] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.802 [2024-04-27 02:45:32.166892] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.802 [2024-04-27 02:45:32.170418] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.802 [2024-04-27 02:45:32.179138] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.802 [2024-04-27 02:45:32.179923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.180397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.180411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.802 [2024-04-27 02:45:32.180420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.802 [2024-04-27 02:45:32.180657] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.802 [2024-04-27 02:45:32.180878] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.802 [2024-04-27 02:45:32.180886] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.802 [2024-04-27 02:45:32.180893] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.802 [2024-04-27 02:45:32.184419] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.802 [2024-04-27 02:45:32.192936] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.802 [2024-04-27 02:45:32.193684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.194164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.194177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.802 [2024-04-27 02:45:32.194186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.802 [2024-04-27 02:45:32.194430] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.802 [2024-04-27 02:45:32.194652] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.802 [2024-04-27 02:45:32.194660] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.802 [2024-04-27 02:45:32.194667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.802 [2024-04-27 02:45:32.198187] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.802 [2024-04-27 02:45:32.206709] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.802 [2024-04-27 02:45:32.207507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.207905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.207918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.802 [2024-04-27 02:45:32.207927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.802 [2024-04-27 02:45:32.208163] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.802 [2024-04-27 02:45:32.208392] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.802 [2024-04-27 02:45:32.208400] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.802 [2024-04-27 02:45:32.208408] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.802 [2024-04-27 02:45:32.211931] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.802 [2024-04-27 02:45:32.220646] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.802 [2024-04-27 02:45:32.221360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.221902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.221915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.802 [2024-04-27 02:45:32.221924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.802 [2024-04-27 02:45:32.222160] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.802 [2024-04-27 02:45:32.222389] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.802 [2024-04-27 02:45:32.222397] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.802 [2024-04-27 02:45:32.222405] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.802 [2024-04-27 02:45:32.225927] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.802 [2024-04-27 02:45:32.234449] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.802 [2024-04-27 02:45:32.235228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.235740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.235754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.802 [2024-04-27 02:45:32.235763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.802 [2024-04-27 02:45:32.235999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.802 [2024-04-27 02:45:32.236220] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.802 [2024-04-27 02:45:32.236228] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.802 [2024-04-27 02:45:32.236235] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.802 [2024-04-27 02:45:32.239762] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.802 [2024-04-27 02:45:32.248274] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.802 [2024-04-27 02:45:32.249066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.249430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.249444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.802 [2024-04-27 02:45:32.249453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.802 [2024-04-27 02:45:32.249689] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.802 [2024-04-27 02:45:32.249910] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.802 [2024-04-27 02:45:32.249919] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.802 [2024-04-27 02:45:32.249926] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.802 [2024-04-27 02:45:32.253455] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.802 [2024-04-27 02:45:32.262172] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.802 [2024-04-27 02:45:32.262851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.263309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.802 [2024-04-27 02:45:32.263321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.802 [2024-04-27 02:45:32.263328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.803 [2024-04-27 02:45:32.263547] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.803 [2024-04-27 02:45:32.263764] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.803 [2024-04-27 02:45:32.263773] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.803 [2024-04-27 02:45:32.263780] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.803 [2024-04-27 02:45:32.267301] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.803 [2024-04-27 02:45:32.276019] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.803 [2024-04-27 02:45:32.276704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.277167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.277177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.803 [2024-04-27 02:45:32.277184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.803 [2024-04-27 02:45:32.277405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.803 [2024-04-27 02:45:32.277623] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.803 [2024-04-27 02:45:32.277630] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.803 [2024-04-27 02:45:32.277637] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.803 [2024-04-27 02:45:32.281157] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.803 [2024-04-27 02:45:32.289897] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.803 [2024-04-27 02:45:32.290584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.291044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.291053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.803 [2024-04-27 02:45:32.291061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.803 [2024-04-27 02:45:32.291283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.803 [2024-04-27 02:45:32.291501] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.803 [2024-04-27 02:45:32.291509] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.803 [2024-04-27 02:45:32.291516] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.803 [2024-04-27 02:45:32.295045] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.803 [2024-04-27 02:45:32.303783] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.803 [2024-04-27 02:45:32.304510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.305019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.305032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.803 [2024-04-27 02:45:32.305041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.803 [2024-04-27 02:45:32.305291] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.803 [2024-04-27 02:45:32.305512] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.803 [2024-04-27 02:45:32.305520] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.803 [2024-04-27 02:45:32.305527] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.803 [2024-04-27 02:45:32.309067] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.803 [2024-04-27 02:45:32.317603] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.803 [2024-04-27 02:45:32.318146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.318706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.318743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.803 [2024-04-27 02:45:32.318758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.803 [2024-04-27 02:45:32.318994] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.803 [2024-04-27 02:45:32.319215] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.803 [2024-04-27 02:45:32.319223] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.803 [2024-04-27 02:45:32.319230] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.803 [2024-04-27 02:45:32.322768] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.803 [2024-04-27 02:45:32.331514] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.803 [2024-04-27 02:45:32.332250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.332777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.332814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.803 [2024-04-27 02:45:32.332825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.803 [2024-04-27 02:45:32.333061] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.803 [2024-04-27 02:45:32.333290] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.803 [2024-04-27 02:45:32.333298] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.803 [2024-04-27 02:45:32.333306] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.803 [2024-04-27 02:45:32.336837] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.803 [2024-04-27 02:45:32.345446] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.803 [2024-04-27 02:45:32.346205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.346771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.346786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.803 [2024-04-27 02:45:32.346795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.803 [2024-04-27 02:45:32.347032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.803 [2024-04-27 02:45:32.347252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.803 [2024-04-27 02:45:32.347261] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.803 [2024-04-27 02:45:32.347268] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.803 [2024-04-27 02:45:32.350803] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.803 [2024-04-27 02:45:32.359340] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.803 [2024-04-27 02:45:32.360091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.360580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.360596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.803 [2024-04-27 02:45:32.360605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.803 [2024-04-27 02:45:32.360846] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.803 [2024-04-27 02:45:32.361068] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.803 [2024-04-27 02:45:32.361076] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.803 [2024-04-27 02:45:32.361083] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.803 [2024-04-27 02:45:32.364625] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.803 [2024-04-27 02:45:32.373166] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.803 [2024-04-27 02:45:32.373864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.374321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.374331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.803 [2024-04-27 02:45:32.374340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.803 [2024-04-27 02:45:32.374558] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.803 [2024-04-27 02:45:32.374776] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.803 [2024-04-27 02:45:32.374784] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.803 [2024-04-27 02:45:32.374791] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.803 [2024-04-27 02:45:32.378319] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.803 [2024-04-27 02:45:32.387049] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.803 [2024-04-27 02:45:32.387756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.388208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.803 [2024-04-27 02:45:32.388217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.803 [2024-04-27 02:45:32.388224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.803 [2024-04-27 02:45:32.388446] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.803 [2024-04-27 02:45:32.388664] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.803 [2024-04-27 02:45:32.388671] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.803 [2024-04-27 02:45:32.388678] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.804 [2024-04-27 02:45:32.392202] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:58.804 [2024-04-27 02:45:32.400949] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.804 [2024-04-27 02:45:32.401534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.804 [2024-04-27 02:45:32.401986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.804 [2024-04-27 02:45:32.401995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.804 [2024-04-27 02:45:32.402003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.804 [2024-04-27 02:45:32.402220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.804 [2024-04-27 02:45:32.402445] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.804 [2024-04-27 02:45:32.402454] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.804 [2024-04-27 02:45:32.402460] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:58.804 [2024-04-27 02:45:32.405999] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:58.804 [2024-04-27 02:45:32.414738] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.804 [2024-04-27 02:45:32.415422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.804 [2024-04-27 02:45:32.415875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.804 [2024-04-27 02:45:32.415884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:58.804 [2024-04-27 02:45:32.415891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:58.804 [2024-04-27 02:45:32.416109] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:58.804 [2024-04-27 02:45:32.416331] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:58.804 [2024-04-27 02:45:32.416340] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:58.804 [2024-04-27 02:45:32.416347] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.066 [2024-04-27 02:45:32.419878] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.066 [2024-04-27 02:45:32.428620] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.066 [2024-04-27 02:45:32.429345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.066 [2024-04-27 02:45:32.429512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.066 [2024-04-27 02:45:32.429522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.066 [2024-04-27 02:45:32.429529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.066 [2024-04-27 02:45:32.429747] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.066 [2024-04-27 02:45:32.429965] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.067 [2024-04-27 02:45:32.429972] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.067 [2024-04-27 02:45:32.429979] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.067 [2024-04-27 02:45:32.433517] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.067 [2024-04-27 02:45:32.442467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.067 [2024-04-27 02:45:32.443148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.443632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.443642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.067 [2024-04-27 02:45:32.443650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.067 [2024-04-27 02:45:32.443868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.067 [2024-04-27 02:45:32.444085] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.067 [2024-04-27 02:45:32.444096] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.067 [2024-04-27 02:45:32.444103] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.067 [2024-04-27 02:45:32.447714] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.067 [2024-04-27 02:45:32.456257] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.067 [2024-04-27 02:45:32.456993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.457454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.457464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.067 [2024-04-27 02:45:32.457471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.067 [2024-04-27 02:45:32.457689] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.067 [2024-04-27 02:45:32.457905] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.067 [2024-04-27 02:45:32.457914] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.067 [2024-04-27 02:45:32.457920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.067 [2024-04-27 02:45:32.461453] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.067 [2024-04-27 02:45:32.470195] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.067 [2024-04-27 02:45:32.470801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.471263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.471272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.067 [2024-04-27 02:45:32.471287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.067 [2024-04-27 02:45:32.471509] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.067 [2024-04-27 02:45:32.471727] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.067 [2024-04-27 02:45:32.471742] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.067 [2024-04-27 02:45:32.471749] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.067 [2024-04-27 02:45:32.475269] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.067 [2024-04-27 02:45:32.484157] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.067 [2024-04-27 02:45:32.484930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.485313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.485332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.067 [2024-04-27 02:45:32.485339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.067 [2024-04-27 02:45:32.485561] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.067 [2024-04-27 02:45:32.485778] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.067 [2024-04-27 02:45:32.485787] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.067 [2024-04-27 02:45:32.485797] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.067 [2024-04-27 02:45:32.489332] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.067 [2024-04-27 02:45:32.498070] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.067 [2024-04-27 02:45:32.498810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.499269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.499287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.067 [2024-04-27 02:45:32.499299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.067 [2024-04-27 02:45:32.499522] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.067 [2024-04-27 02:45:32.499740] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.067 [2024-04-27 02:45:32.499748] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.067 [2024-04-27 02:45:32.499756] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.067 [2024-04-27 02:45:32.503311] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.067 [2024-04-27 02:45:32.511865] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.067 [2024-04-27 02:45:32.512622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.513100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.513112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.067 [2024-04-27 02:45:32.513122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.067 [2024-04-27 02:45:32.513364] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.067 [2024-04-27 02:45:32.513585] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.067 [2024-04-27 02:45:32.513593] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.067 [2024-04-27 02:45:32.513601] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.067 [2024-04-27 02:45:32.517138] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.067 [2024-04-27 02:45:32.525680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.067 [2024-04-27 02:45:32.526504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.526996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.527009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.067 [2024-04-27 02:45:32.527018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.067 [2024-04-27 02:45:32.527254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.067 [2024-04-27 02:45:32.527489] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.067 [2024-04-27 02:45:32.527499] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.067 [2024-04-27 02:45:32.527507] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.067 [2024-04-27 02:45:32.531038] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.067 [2024-04-27 02:45:32.539584] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.067 [2024-04-27 02:45:32.540284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.540837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.540874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.067 [2024-04-27 02:45:32.540884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.067 [2024-04-27 02:45:32.541121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.067 [2024-04-27 02:45:32.541350] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.067 [2024-04-27 02:45:32.541359] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.067 [2024-04-27 02:45:32.541367] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.067 [2024-04-27 02:45:32.544902] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.067 [2024-04-27 02:45:32.553439] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.067 [2024-04-27 02:45:32.554010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.554607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.067 [2024-04-27 02:45:32.554644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.067 [2024-04-27 02:45:32.554655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.067 [2024-04-27 02:45:32.554892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.067 [2024-04-27 02:45:32.555113] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.067 [2024-04-27 02:45:32.555121] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.067 [2024-04-27 02:45:32.555129] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.067 [2024-04-27 02:45:32.558671] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.067 [2024-04-27 02:45:32.567418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.068 [2024-04-27 02:45:32.568113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.568603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.568640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.068 [2024-04-27 02:45:32.568652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.068 [2024-04-27 02:45:32.568891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.068 [2024-04-27 02:45:32.569112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.068 [2024-04-27 02:45:32.569121] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.068 [2024-04-27 02:45:32.569129] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.068 [2024-04-27 02:45:32.572669] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.068 [2024-04-27 02:45:32.581212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.068 [2024-04-27 02:45:32.581943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.582494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.582531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.068 [2024-04-27 02:45:32.582541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.068 [2024-04-27 02:45:32.582778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.068 [2024-04-27 02:45:32.582999] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.068 [2024-04-27 02:45:32.583007] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.068 [2024-04-27 02:45:32.583015] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.068 [2024-04-27 02:45:32.586556] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.068 [2024-04-27 02:45:32.595094] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.068 [2024-04-27 02:45:32.595706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.596938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.596962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.068 [2024-04-27 02:45:32.596971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.068 [2024-04-27 02:45:32.597197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.068 [2024-04-27 02:45:32.597423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.068 [2024-04-27 02:45:32.597431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.068 [2024-04-27 02:45:32.597439] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.068 [2024-04-27 02:45:32.600969] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.068 [2024-04-27 02:45:32.608891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.068 [2024-04-27 02:45:32.609686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.610166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.610180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.068 [2024-04-27 02:45:32.610189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.068 [2024-04-27 02:45:32.610430] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.068 [2024-04-27 02:45:32.610652] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.068 [2024-04-27 02:45:32.610660] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.068 [2024-04-27 02:45:32.610667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.068 [2024-04-27 02:45:32.614223] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.068 [2024-04-27 02:45:32.622768] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.068 [2024-04-27 02:45:32.623591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.623981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.623995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.068 [2024-04-27 02:45:32.624005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.068 [2024-04-27 02:45:32.624241] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.068 [2024-04-27 02:45:32.624469] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.068 [2024-04-27 02:45:32.624477] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.068 [2024-04-27 02:45:32.624485] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.068 [2024-04-27 02:45:32.628023] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.068 [2024-04-27 02:45:32.636562] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.068 [2024-04-27 02:45:32.637318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.637803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.637814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.068 [2024-04-27 02:45:32.637822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.068 [2024-04-27 02:45:32.638044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.068 [2024-04-27 02:45:32.638264] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.068 [2024-04-27 02:45:32.638271] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.068 [2024-04-27 02:45:32.638283] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.068 [2024-04-27 02:45:32.641826] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.068 [2024-04-27 02:45:32.650359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.068 [2024-04-27 02:45:32.651065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.651608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.651644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.068 [2024-04-27 02:45:32.651655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.068 [2024-04-27 02:45:32.651892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.068 [2024-04-27 02:45:32.652112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.068 [2024-04-27 02:45:32.652121] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.068 [2024-04-27 02:45:32.652128] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.068 [2024-04-27 02:45:32.655669] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.068 [2024-04-27 02:45:32.664201] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.068 [2024-04-27 02:45:32.664901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.665392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.665402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.068 [2024-04-27 02:45:32.665410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.068 [2024-04-27 02:45:32.665629] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.068 [2024-04-27 02:45:32.665846] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.068 [2024-04-27 02:45:32.665854] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.068 [2024-04-27 02:45:32.665860] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.068 [2024-04-27 02:45:32.669394] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.068 [2024-04-27 02:45:32.678171] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.068 [2024-04-27 02:45:32.678885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.679362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.068 [2024-04-27 02:45:32.679373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.068 [2024-04-27 02:45:32.679380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.068 [2024-04-27 02:45:32.679598] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.068 [2024-04-27 02:45:32.679815] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.068 [2024-04-27 02:45:32.679823] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.068 [2024-04-27 02:45:32.679830] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.068 [2024-04-27 02:45:32.683356] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.331 [2024-04-27 02:45:32.692094] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.331 [2024-04-27 02:45:32.692865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.331 [2024-04-27 02:45:32.693328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.331 [2024-04-27 02:45:32.693338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.331 [2024-04-27 02:45:32.693346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.331 [2024-04-27 02:45:32.693564] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.331 [2024-04-27 02:45:32.693780] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.331 [2024-04-27 02:45:32.693788] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.331 [2024-04-27 02:45:32.693794] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.331 [2024-04-27 02:45:32.697323] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.331 [2024-04-27 02:45:32.706074] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.331 [2024-04-27 02:45:32.706922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.331 [2024-04-27 02:45:32.707391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.331 [2024-04-27 02:45:32.707404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.331 [2024-04-27 02:45:32.707415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.331 [2024-04-27 02:45:32.707634] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.332 [2024-04-27 02:45:32.707851] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.332 [2024-04-27 02:45:32.707858] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.332 [2024-04-27 02:45:32.707865] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.332 [2024-04-27 02:45:32.711397] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.332 [2024-04-27 02:45:32.719922] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.332 [2024-04-27 02:45:32.720438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.720927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.720936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.332 [2024-04-27 02:45:32.720943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.332 [2024-04-27 02:45:32.721161] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.332 [2024-04-27 02:45:32.721383] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.332 [2024-04-27 02:45:32.721392] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.332 [2024-04-27 02:45:32.721399] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.332 [2024-04-27 02:45:32.724926] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.332 [2024-04-27 02:45:32.733865] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.332 [2024-04-27 02:45:32.734454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.734952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.734962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.332 [2024-04-27 02:45:32.734969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.332 [2024-04-27 02:45:32.735187] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.332 [2024-04-27 02:45:32.735409] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.332 [2024-04-27 02:45:32.735417] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.332 [2024-04-27 02:45:32.735423] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.332 [2024-04-27 02:45:32.738949] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.332 [2024-04-27 02:45:32.747680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.332 [2024-04-27 02:45:32.748258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.748675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.748685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.332 [2024-04-27 02:45:32.748692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.332 [2024-04-27 02:45:32.748914] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.332 [2024-04-27 02:45:32.749131] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.332 [2024-04-27 02:45:32.749145] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.332 [2024-04-27 02:45:32.749151] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.332 [2024-04-27 02:45:32.752676] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.332 [2024-04-27 02:45:32.761618] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.332 [2024-04-27 02:45:32.762513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.763001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.763014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.332 [2024-04-27 02:45:32.763023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.332 [2024-04-27 02:45:32.763260] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.332 [2024-04-27 02:45:32.763486] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.332 [2024-04-27 02:45:32.763495] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.332 [2024-04-27 02:45:32.763502] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.332 [2024-04-27 02:45:32.767039] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.332 [2024-04-27 02:45:32.775582] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.332 [2024-04-27 02:45:32.776083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.776609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.776646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.332 [2024-04-27 02:45:32.776657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.332 [2024-04-27 02:45:32.776898] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.332 [2024-04-27 02:45:32.777118] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.332 [2024-04-27 02:45:32.777127] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.332 [2024-04-27 02:45:32.777134] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.332 [2024-04-27 02:45:32.780676] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.332 [2024-04-27 02:45:32.789426] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.332 [2024-04-27 02:45:32.790129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.790607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.790644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.332 [2024-04-27 02:45:32.790654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.332 [2024-04-27 02:45:32.790891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.332 [2024-04-27 02:45:32.791116] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.332 [2024-04-27 02:45:32.791125] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.332 [2024-04-27 02:45:32.791132] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.332 [2024-04-27 02:45:32.794683] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.332 [2024-04-27 02:45:32.803222] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.332 [2024-04-27 02:45:32.803927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.804518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.804555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.332 [2024-04-27 02:45:32.804566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.332 [2024-04-27 02:45:32.804802] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.332 [2024-04-27 02:45:32.805023] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.332 [2024-04-27 02:45:32.805031] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.332 [2024-04-27 02:45:32.805038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.332 [2024-04-27 02:45:32.808589] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.332 [2024-04-27 02:45:32.817125] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.332 [2024-04-27 02:45:32.817874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.818327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.818337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.332 [2024-04-27 02:45:32.818345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.332 [2024-04-27 02:45:32.818563] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.332 [2024-04-27 02:45:32.818781] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.332 [2024-04-27 02:45:32.818789] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.332 [2024-04-27 02:45:32.818796] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.332 [2024-04-27 02:45:32.822328] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.332 [2024-04-27 02:45:32.831063] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.332 [2024-04-27 02:45:32.831709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.832163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.332 [2024-04-27 02:45:32.832173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.332 [2024-04-27 02:45:32.832180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.332 [2024-04-27 02:45:32.832409] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.332 [2024-04-27 02:45:32.832628] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.332 [2024-04-27 02:45:32.832640] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.332 [2024-04-27 02:45:32.832647] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.332 [2024-04-27 02:45:32.836166] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.332 [2024-04-27 02:45:32.844905] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.333 [2024-04-27 02:45:32.845598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.846051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.846061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.333 [2024-04-27 02:45:32.846068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.333 [2024-04-27 02:45:32.846290] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.333 [2024-04-27 02:45:32.846508] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.333 [2024-04-27 02:45:32.846515] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.333 [2024-04-27 02:45:32.846522] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.333 [2024-04-27 02:45:32.850050] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.333 [2024-04-27 02:45:32.858783] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.333 [2024-04-27 02:45:32.859555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.860036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.860049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.333 [2024-04-27 02:45:32.860058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.333 [2024-04-27 02:45:32.860308] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.333 [2024-04-27 02:45:32.860531] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.333 [2024-04-27 02:45:32.860539] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.333 [2024-04-27 02:45:32.860546] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.333 [2024-04-27 02:45:32.864072] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.333 [2024-04-27 02:45:32.872617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.333 [2024-04-27 02:45:32.873468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.873948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.873960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.333 [2024-04-27 02:45:32.873969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.333 [2024-04-27 02:45:32.874206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.333 [2024-04-27 02:45:32.874433] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.333 [2024-04-27 02:45:32.874441] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.333 [2024-04-27 02:45:32.874453] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.333 [2024-04-27 02:45:32.877990] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.333 [2024-04-27 02:45:32.886531] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.333 [2024-04-27 02:45:32.887219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.887700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.887711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.333 [2024-04-27 02:45:32.887719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.333 [2024-04-27 02:45:32.887937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.333 [2024-04-27 02:45:32.888154] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.333 [2024-04-27 02:45:32.888162] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.333 [2024-04-27 02:45:32.888169] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.333 [2024-04-27 02:45:32.891703] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.333 [2024-04-27 02:45:32.900455] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.333 [2024-04-27 02:45:32.901139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.901678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.901715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.333 [2024-04-27 02:45:32.901726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.333 [2024-04-27 02:45:32.901962] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.333 [2024-04-27 02:45:32.902183] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.333 [2024-04-27 02:45:32.902191] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.333 [2024-04-27 02:45:32.902199] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.333 [2024-04-27 02:45:32.905749] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.333 [2024-04-27 02:45:32.914290] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.333 [2024-04-27 02:45:32.914991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.915491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.915528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.333 [2024-04-27 02:45:32.915539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.333 [2024-04-27 02:45:32.915775] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.333 [2024-04-27 02:45:32.915996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.333 [2024-04-27 02:45:32.916004] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.333 [2024-04-27 02:45:32.916011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.333 [2024-04-27 02:45:32.919557] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.333 [2024-04-27 02:45:32.928096] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.333 [2024-04-27 02:45:32.928825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.929313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.929333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.333 [2024-04-27 02:45:32.929341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.333 [2024-04-27 02:45:32.929564] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.333 [2024-04-27 02:45:32.929782] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.333 [2024-04-27 02:45:32.929790] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.333 [2024-04-27 02:45:32.929797] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.333 [2024-04-27 02:45:32.933335] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.333 [2024-04-27 02:45:32.942074] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.333 [2024-04-27 02:45:32.942765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.943219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.333 [2024-04-27 02:45:32.943229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.333 [2024-04-27 02:45:32.943236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.333 [2024-04-27 02:45:32.943458] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.333 [2024-04-27 02:45:32.943676] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.333 [2024-04-27 02:45:32.943684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.333 [2024-04-27 02:45:32.943690] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.333 [2024-04-27 02:45:32.947223] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.595 [2024-04-27 02:45:32.955963] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.595 [2024-04-27 02:45:32.956656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:32.957113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:32.957123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.595 [2024-04-27 02:45:32.957130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.595 [2024-04-27 02:45:32.957352] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.595 [2024-04-27 02:45:32.957570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.595 [2024-04-27 02:45:32.957577] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.595 [2024-04-27 02:45:32.957584] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.595 [2024-04-27 02:45:32.961111] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.595 [2024-04-27 02:45:32.969862] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.595 [2024-04-27 02:45:32.970616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:32.971097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:32.971110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.595 [2024-04-27 02:45:32.971119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.595 [2024-04-27 02:45:32.971361] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.595 [2024-04-27 02:45:32.971582] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.595 [2024-04-27 02:45:32.971590] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.595 [2024-04-27 02:45:32.971597] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.595 [2024-04-27 02:45:32.975133] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.595 [2024-04-27 02:45:32.983671] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.595 [2024-04-27 02:45:32.984265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:32.984721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:32.984758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.595 [2024-04-27 02:45:32.984769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.595 [2024-04-27 02:45:32.985006] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.595 [2024-04-27 02:45:32.985226] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.595 [2024-04-27 02:45:32.985234] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.595 [2024-04-27 02:45:32.985241] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.595 [2024-04-27 02:45:32.988785] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.595 [2024-04-27 02:45:32.997532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.595 [2024-04-27 02:45:32.998270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:32.998821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:32.998857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.595 [2024-04-27 02:45:32.998868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.595 [2024-04-27 02:45:32.999104] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.595 [2024-04-27 02:45:32.999333] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.595 [2024-04-27 02:45:32.999342] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.595 [2024-04-27 02:45:32.999349] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.595 [2024-04-27 02:45:33.002892] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.595 [2024-04-27 02:45:33.011443] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.595 [2024-04-27 02:45:33.012104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:33.012654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:33.012691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.595 [2024-04-27 02:45:33.012702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.595 [2024-04-27 02:45:33.012938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.595 [2024-04-27 02:45:33.013159] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.595 [2024-04-27 02:45:33.013167] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.595 [2024-04-27 02:45:33.013174] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.595 [2024-04-27 02:45:33.016711] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.595 [2024-04-27 02:45:33.025242] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.595 [2024-04-27 02:45:33.025983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:33.026509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:33.026546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.595 [2024-04-27 02:45:33.026557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.595 [2024-04-27 02:45:33.026793] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.595 [2024-04-27 02:45:33.027014] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.595 [2024-04-27 02:45:33.027023] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.595 [2024-04-27 02:45:33.027030] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.595 [2024-04-27 02:45:33.030573] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.595 [2024-04-27 02:45:33.039104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.595 [2024-04-27 02:45:33.039798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:33.040253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:33.040263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.595 [2024-04-27 02:45:33.040271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.595 [2024-04-27 02:45:33.040501] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.595 [2024-04-27 02:45:33.040719] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.595 [2024-04-27 02:45:33.040726] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.595 [2024-04-27 02:45:33.040733] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.595 [2024-04-27 02:45:33.044257] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.595 [2024-04-27 02:45:33.052997] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.595 [2024-04-27 02:45:33.053768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:33.054249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:33.054262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.595 [2024-04-27 02:45:33.054271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.595 [2024-04-27 02:45:33.054513] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.595 [2024-04-27 02:45:33.054734] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.595 [2024-04-27 02:45:33.054742] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.595 [2024-04-27 02:45:33.054749] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.595 [2024-04-27 02:45:33.058291] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.595 [2024-04-27 02:45:33.066818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.595 [2024-04-27 02:45:33.067636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:33.068125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:33.068138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.595 [2024-04-27 02:45:33.068147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.595 [2024-04-27 02:45:33.068395] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.595 [2024-04-27 02:45:33.068618] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.595 [2024-04-27 02:45:33.068626] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.595 [2024-04-27 02:45:33.068633] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.595 [2024-04-27 02:45:33.072158] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.595 [2024-04-27 02:45:33.080678] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.595 [2024-04-27 02:45:33.081373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:33.081862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.595 [2024-04-27 02:45:33.081874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.595 [2024-04-27 02:45:33.081884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.595 [2024-04-27 02:45:33.082120] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.596 [2024-04-27 02:45:33.082353] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.596 [2024-04-27 02:45:33.082364] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.596 [2024-04-27 02:45:33.082371] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.596 [2024-04-27 02:45:33.085892] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.596 [2024-04-27 02:45:33.094627] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.596 [2024-04-27 02:45:33.095377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.095908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.095921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.596 [2024-04-27 02:45:33.095934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.596 [2024-04-27 02:45:33.096171] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.596 [2024-04-27 02:45:33.096416] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.596 [2024-04-27 02:45:33.096427] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.596 [2024-04-27 02:45:33.096434] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.596 [2024-04-27 02:45:33.099960] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.596 [2024-04-27 02:45:33.108501] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.596 [2024-04-27 02:45:33.109076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.109622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.109659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.596 [2024-04-27 02:45:33.109669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.596 [2024-04-27 02:45:33.109906] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.596 [2024-04-27 02:45:33.110127] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.596 [2024-04-27 02:45:33.110135] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.596 [2024-04-27 02:45:33.110142] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.596 [2024-04-27 02:45:33.113679] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.596 [2024-04-27 02:45:33.122422] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.596 [2024-04-27 02:45:33.123133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.123693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.123730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.596 [2024-04-27 02:45:33.123740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.596 [2024-04-27 02:45:33.123977] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.596 [2024-04-27 02:45:33.124198] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.596 [2024-04-27 02:45:33.124206] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.596 [2024-04-27 02:45:33.124213] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.596 [2024-04-27 02:45:33.127753] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.596 [2024-04-27 02:45:33.136293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.596 [2024-04-27 02:45:33.137084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.137669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.137705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.596 [2024-04-27 02:45:33.137716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.596 [2024-04-27 02:45:33.137957] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.596 [2024-04-27 02:45:33.138178] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.596 [2024-04-27 02:45:33.138186] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.596 [2024-04-27 02:45:33.138193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.596 [2024-04-27 02:45:33.141734] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.596 [2024-04-27 02:45:33.150261] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.596 [2024-04-27 02:45:33.150958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.151503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.151539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.596 [2024-04-27 02:45:33.151550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.596 [2024-04-27 02:45:33.151787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.596 [2024-04-27 02:45:33.152007] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.596 [2024-04-27 02:45:33.152015] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.596 [2024-04-27 02:45:33.152022] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.596 [2024-04-27 02:45:33.155559] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.596 [2024-04-27 02:45:33.164105] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.596 [2024-04-27 02:45:33.164848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.165206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.165216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.596 [2024-04-27 02:45:33.165224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.596 [2024-04-27 02:45:33.165447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.596 [2024-04-27 02:45:33.165666] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.596 [2024-04-27 02:45:33.165673] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.596 [2024-04-27 02:45:33.165680] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.596 [2024-04-27 02:45:33.169210] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.596 [2024-04-27 02:45:33.177951] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.596 [2024-04-27 02:45:33.178721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.179205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.179218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.596 [2024-04-27 02:45:33.179228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.596 [2024-04-27 02:45:33.179476] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.596 [2024-04-27 02:45:33.179697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.596 [2024-04-27 02:45:33.179705] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.596 [2024-04-27 02:45:33.179712] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.596 [2024-04-27 02:45:33.183239] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.596 [2024-04-27 02:45:33.191811] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.596 [2024-04-27 02:45:33.192271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.192644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.192680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.596 [2024-04-27 02:45:33.192692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.596 [2024-04-27 02:45:33.192931] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.596 [2024-04-27 02:45:33.193151] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.596 [2024-04-27 02:45:33.193159] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.596 [2024-04-27 02:45:33.193167] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.596 [2024-04-27 02:45:33.196704] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.596 [2024-04-27 02:45:33.205654] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.596 [2024-04-27 02:45:33.206457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.206937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.596 [2024-04-27 02:45:33.206950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.596 [2024-04-27 02:45:33.206959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.596 [2024-04-27 02:45:33.207195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.596 [2024-04-27 02:45:33.207429] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.596 [2024-04-27 02:45:33.207439] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.596 [2024-04-27 02:45:33.207446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.596 [2024-04-27 02:45:33.210975] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.858 [2024-04-27 02:45:33.219522] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.858 [2024-04-27 02:45:33.220346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.858 [2024-04-27 02:45:33.220881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.858 [2024-04-27 02:45:33.220893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.858 [2024-04-27 02:45:33.220903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.858 [2024-04-27 02:45:33.221139] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.858 [2024-04-27 02:45:33.221382] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.858 [2024-04-27 02:45:33.221393] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.858 [2024-04-27 02:45:33.221400] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.858 [2024-04-27 02:45:33.224922] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.858 [2024-04-27 02:45:33.233445] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.858 [2024-04-27 02:45:33.234199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.858 [2024-04-27 02:45:33.234691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.858 [2024-04-27 02:45:33.234706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.858 [2024-04-27 02:45:33.234715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.858 [2024-04-27 02:45:33.234951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.858 [2024-04-27 02:45:33.235172] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.858 [2024-04-27 02:45:33.235180] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.858 [2024-04-27 02:45:33.235187] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.858 [2024-04-27 02:45:33.238721] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.858 [2024-04-27 02:45:33.247253] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.858 [2024-04-27 02:45:33.248078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.858 [2024-04-27 02:45:33.248655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.248692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.859 [2024-04-27 02:45:33.248702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.859 [2024-04-27 02:45:33.248938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.859 [2024-04-27 02:45:33.249159] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.859 [2024-04-27 02:45:33.249167] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.859 [2024-04-27 02:45:33.249174] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.859 [2024-04-27 02:45:33.252723] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.859 [2024-04-27 02:45:33.261046] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.859 [2024-04-27 02:45:33.261806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.262489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.262526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.859 [2024-04-27 02:45:33.262536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.859 [2024-04-27 02:45:33.262773] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.859 [2024-04-27 02:45:33.262994] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.859 [2024-04-27 02:45:33.263002] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.859 [2024-04-27 02:45:33.263014] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.859 [2024-04-27 02:45:33.266556] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.859 [2024-04-27 02:45:33.274877] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.859 [2024-04-27 02:45:33.275662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.276145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.276157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.859 [2024-04-27 02:45:33.276166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.859 [2024-04-27 02:45:33.276413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.859 [2024-04-27 02:45:33.276635] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.859 [2024-04-27 02:45:33.276643] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.859 [2024-04-27 02:45:33.276650] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.859 [2024-04-27 02:45:33.280177] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.859 [2024-04-27 02:45:33.288730] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.859 [2024-04-27 02:45:33.289564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.290055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.290068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.859 [2024-04-27 02:45:33.290078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.859 [2024-04-27 02:45:33.290320] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.859 [2024-04-27 02:45:33.290542] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.859 [2024-04-27 02:45:33.290550] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.859 [2024-04-27 02:45:33.290557] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.859 [2024-04-27 02:45:33.294090] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.859 [2024-04-27 02:45:33.302626] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.859 [2024-04-27 02:45:33.303485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.303879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.303891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.859 [2024-04-27 02:45:33.303900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.859 [2024-04-27 02:45:33.304136] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.859 [2024-04-27 02:45:33.304371] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.859 [2024-04-27 02:45:33.304382] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.859 [2024-04-27 02:45:33.304393] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.859 [2024-04-27 02:45:33.307929] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.859 [2024-04-27 02:45:33.316472] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.859 [2024-04-27 02:45:33.317160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.317552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.317562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.859 [2024-04-27 02:45:33.317570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.859 [2024-04-27 02:45:33.317788] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.859 [2024-04-27 02:45:33.318005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.859 [2024-04-27 02:45:33.318013] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.859 [2024-04-27 02:45:33.318019] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.859 [2024-04-27 02:45:33.321554] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.859 [2024-04-27 02:45:33.330293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.859 [2024-04-27 02:45:33.330979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.331433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.331445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.859 [2024-04-27 02:45:33.331453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.859 [2024-04-27 02:45:33.331671] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.859 [2024-04-27 02:45:33.331889] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.859 [2024-04-27 02:45:33.331896] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.859 [2024-04-27 02:45:33.331903] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.859 [2024-04-27 02:45:33.335436] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.859 [2024-04-27 02:45:33.344162] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.859 [2024-04-27 02:45:33.344845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.345479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.345516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.859 [2024-04-27 02:45:33.345527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.859 [2024-04-27 02:45:33.345763] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.859 [2024-04-27 02:45:33.345983] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.859 [2024-04-27 02:45:33.345992] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.859 [2024-04-27 02:45:33.345999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.859 [2024-04-27 02:45:33.349541] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.859 [2024-04-27 02:45:33.358075] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.859 [2024-04-27 02:45:33.358862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.359342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.359356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.859 [2024-04-27 02:45:33.359365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.859 [2024-04-27 02:45:33.359602] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.859 [2024-04-27 02:45:33.359822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.859 [2024-04-27 02:45:33.359830] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.859 [2024-04-27 02:45:33.359837] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.859 [2024-04-27 02:45:33.363371] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.859 [2024-04-27 02:45:33.371896] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.859 [2024-04-27 02:45:33.372680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.373162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.859 [2024-04-27 02:45:33.373175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.859 [2024-04-27 02:45:33.373185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.860 [2024-04-27 02:45:33.373434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.860 [2024-04-27 02:45:33.373657] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.860 [2024-04-27 02:45:33.373665] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.860 [2024-04-27 02:45:33.373672] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.860 [2024-04-27 02:45:33.377194] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.860 [2024-04-27 02:45:33.385719] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.860 [2024-04-27 02:45:33.386534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.387015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.387028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.860 [2024-04-27 02:45:33.387037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.860 [2024-04-27 02:45:33.387274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.860 [2024-04-27 02:45:33.387510] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.860 [2024-04-27 02:45:33.387519] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.860 [2024-04-27 02:45:33.387526] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.860 [2024-04-27 02:45:33.391053] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.860 [2024-04-27 02:45:33.399575] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.860 [2024-04-27 02:45:33.400269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.400609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.400619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.860 [2024-04-27 02:45:33.400626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.860 [2024-04-27 02:45:33.400844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.860 [2024-04-27 02:45:33.401061] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.860 [2024-04-27 02:45:33.401069] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.860 [2024-04-27 02:45:33.401076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.860 [2024-04-27 02:45:33.404599] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.860 [2024-04-27 02:45:33.413432] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.860 [2024-04-27 02:45:33.414124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.414583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.414594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.860 [2024-04-27 02:45:33.414601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.860 [2024-04-27 02:45:33.414819] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.860 [2024-04-27 02:45:33.415036] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.860 [2024-04-27 02:45:33.415044] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.860 [2024-04-27 02:45:33.415051] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.860 [2024-04-27 02:45:33.418581] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.860 [2024-04-27 02:45:33.427314] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.860 [2024-04-27 02:45:33.427993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.428448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.428458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.860 [2024-04-27 02:45:33.428466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.860 [2024-04-27 02:45:33.428683] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.860 [2024-04-27 02:45:33.428901] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.860 [2024-04-27 02:45:33.428908] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.860 [2024-04-27 02:45:33.428915] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.860 [2024-04-27 02:45:33.432444] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.860 [2024-04-27 02:45:33.441175] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.860 [2024-04-27 02:45:33.441870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.442329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.442341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.860 [2024-04-27 02:45:33.442348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.860 [2024-04-27 02:45:33.442567] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.860 [2024-04-27 02:45:33.442784] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.860 [2024-04-27 02:45:33.442791] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.860 [2024-04-27 02:45:33.442798] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.860 [2024-04-27 02:45:33.446327] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.860 [2024-04-27 02:45:33.455052] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.860 [2024-04-27 02:45:33.455826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.456319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.456343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.860 [2024-04-27 02:45:33.456352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.860 [2024-04-27 02:45:33.456589] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.860 [2024-04-27 02:45:33.456809] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.860 [2024-04-27 02:45:33.456817] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.860 [2024-04-27 02:45:33.456824] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.860 [2024-04-27 02:45:33.460361] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.860 [2024-04-27 02:45:33.468888] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.860 [2024-04-27 02:45:33.469674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.470161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.860 [2024-04-27 02:45:33.470174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:25:59.860 [2024-04-27 02:45:33.470183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:25:59.860 [2024-04-27 02:45:33.470426] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:25:59.860 [2024-04-27 02:45:33.470647] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.860 [2024-04-27 02:45:33.470655] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.860 [2024-04-27 02:45:33.470662] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.860 [2024-04-27 02:45:33.474207] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.122 [2024-04-27 02:45:33.482754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.122 [2024-04-27 02:45:33.483192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.122 [2024-04-27 02:45:33.483667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.122 [2024-04-27 02:45:33.483678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.122 [2024-04-27 02:45:33.483690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.122 [2024-04-27 02:45:33.483909] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.122 [2024-04-27 02:45:33.484127] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.122 [2024-04-27 02:45:33.484135] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.122 [2024-04-27 02:45:33.484142] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.122 [2024-04-27 02:45:33.487673] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.122 [2024-04-27 02:45:33.496609] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.122 [2024-04-27 02:45:33.497487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.122 [2024-04-27 02:45:33.497966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.122 [2024-04-27 02:45:33.497979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.122 [2024-04-27 02:45:33.497988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.123 [2024-04-27 02:45:33.498225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.123 [2024-04-27 02:45:33.498458] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.123 [2024-04-27 02:45:33.498468] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.123 [2024-04-27 02:45:33.498476] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.123 [2024-04-27 02:45:33.502004] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.123 [2024-04-27 02:45:33.510551] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.123 [2024-04-27 02:45:33.511377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.511857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.511870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.123 [2024-04-27 02:45:33.511879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.123 [2024-04-27 02:45:33.512115] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.123 [2024-04-27 02:45:33.512348] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.123 [2024-04-27 02:45:33.512358] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.123 [2024-04-27 02:45:33.512366] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.123 [2024-04-27 02:45:33.515890] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.123 [2024-04-27 02:45:33.524417] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.123 [2024-04-27 02:45:33.525222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.525727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.525741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.123 [2024-04-27 02:45:33.525754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.123 [2024-04-27 02:45:33.525991] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.123 [2024-04-27 02:45:33.526211] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.123 [2024-04-27 02:45:33.526219] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.123 [2024-04-27 02:45:33.526226] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.123 [2024-04-27 02:45:33.529762] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.123 [2024-04-27 02:45:33.538322] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.123 [2024-04-27 02:45:33.539062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.539618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.539655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.123 [2024-04-27 02:45:33.539665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.123 [2024-04-27 02:45:33.539902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.123 [2024-04-27 02:45:33.540122] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.123 [2024-04-27 02:45:33.540130] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.123 [2024-04-27 02:45:33.540138] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.123 [2024-04-27 02:45:33.543688] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.123 [2024-04-27 02:45:33.552205] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.123 [2024-04-27 02:45:33.553005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.553501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.553515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.123 [2024-04-27 02:45:33.553525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.123 [2024-04-27 02:45:33.553761] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.123 [2024-04-27 02:45:33.553982] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.123 [2024-04-27 02:45:33.553990] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.123 [2024-04-27 02:45:33.553997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.123 [2024-04-27 02:45:33.557532] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.123 [2024-04-27 02:45:33.566059] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.123 [2024-04-27 02:45:33.566728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.567213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.567226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.123 [2024-04-27 02:45:33.567235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.123 [2024-04-27 02:45:33.567488] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.123 [2024-04-27 02:45:33.567711] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.123 [2024-04-27 02:45:33.567719] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.123 [2024-04-27 02:45:33.567726] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.123 [2024-04-27 02:45:33.571248] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.123 [2024-04-27 02:45:33.579975] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.123 [2024-04-27 02:45:33.580734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.581214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.581226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.123 [2024-04-27 02:45:33.581236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.123 [2024-04-27 02:45:33.581484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.123 [2024-04-27 02:45:33.581706] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.123 [2024-04-27 02:45:33.581714] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.123 [2024-04-27 02:45:33.581721] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.123 [2024-04-27 02:45:33.585242] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.123 [2024-04-27 02:45:33.593766] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.123 [2024-04-27 02:45:33.594529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.595009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.595021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.123 [2024-04-27 02:45:33.595031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.123 [2024-04-27 02:45:33.595267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.123 [2024-04-27 02:45:33.595507] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.123 [2024-04-27 02:45:33.595517] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.123 [2024-04-27 02:45:33.595525] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.123 [2024-04-27 02:45:33.599049] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.123 [2024-04-27 02:45:33.607583] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.123 [2024-04-27 02:45:33.608224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.608830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.608867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.123 [2024-04-27 02:45:33.608877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.123 [2024-04-27 02:45:33.609114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.123 [2024-04-27 02:45:33.609347] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.123 [2024-04-27 02:45:33.609357] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.123 [2024-04-27 02:45:33.609364] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.123 [2024-04-27 02:45:33.612896] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.123 [2024-04-27 02:45:33.621425] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.123 [2024-04-27 02:45:33.622193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.622672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.123 [2024-04-27 02:45:33.622686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.123 [2024-04-27 02:45:33.622695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.123 [2024-04-27 02:45:33.622932] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.123 [2024-04-27 02:45:33.623151] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.123 [2024-04-27 02:45:33.623160] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.123 [2024-04-27 02:45:33.623167] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.124 [2024-04-27 02:45:33.626702] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.124 [2024-04-27 02:45:33.635223] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.124 [2024-04-27 02:45:33.636038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.636510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.636524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.124 [2024-04-27 02:45:33.636534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.124 [2024-04-27 02:45:33.636770] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.124 [2024-04-27 02:45:33.636990] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.124 [2024-04-27 02:45:33.636998] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.124 [2024-04-27 02:45:33.637006] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.124 [2024-04-27 02:45:33.640543] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.124 [2024-04-27 02:45:33.649085] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.124 [2024-04-27 02:45:33.649787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.650241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.650251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.124 [2024-04-27 02:45:33.650260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.124 [2024-04-27 02:45:33.650484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.124 [2024-04-27 02:45:33.650703] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.124 [2024-04-27 02:45:33.650714] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.124 [2024-04-27 02:45:33.650722] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.124 [2024-04-27 02:45:33.654255] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.124 [2024-04-27 02:45:33.662999] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.124 [2024-04-27 02:45:33.663712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.664164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.664173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.124 [2024-04-27 02:45:33.664181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.124 [2024-04-27 02:45:33.664410] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.124 [2024-04-27 02:45:33.664629] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.124 [2024-04-27 02:45:33.664637] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.124 [2024-04-27 02:45:33.664644] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.124 [2024-04-27 02:45:33.668167] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.124 [2024-04-27 02:45:33.676915] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.124 [2024-04-27 02:45:33.677699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.678083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.678095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.124 [2024-04-27 02:45:33.678105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.124 [2024-04-27 02:45:33.678350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.124 [2024-04-27 02:45:33.678572] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.124 [2024-04-27 02:45:33.678580] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.124 [2024-04-27 02:45:33.678587] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.124 [2024-04-27 02:45:33.682129] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.124 [2024-04-27 02:45:33.690886] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.124 [2024-04-27 02:45:33.691573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.691920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.691932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.124 [2024-04-27 02:45:33.691941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.124 [2024-04-27 02:45:33.692178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.124 [2024-04-27 02:45:33.692412] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.124 [2024-04-27 02:45:33.692423] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.124 [2024-04-27 02:45:33.692435] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.124 [2024-04-27 02:45:33.695958] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.124 [2024-04-27 02:45:33.704728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.124 [2024-04-27 02:45:33.705481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.705840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.705852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.124 [2024-04-27 02:45:33.705862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.124 [2024-04-27 02:45:33.706098] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.124 [2024-04-27 02:45:33.706347] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.124 [2024-04-27 02:45:33.706358] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.124 [2024-04-27 02:45:33.706365] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.124 [2024-04-27 02:45:33.709893] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.124 [2024-04-27 02:45:33.718647] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.124 [2024-04-27 02:45:33.719450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.719936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.719949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.124 [2024-04-27 02:45:33.719959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.124 [2024-04-27 02:45:33.720195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.124 [2024-04-27 02:45:33.720422] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.124 [2024-04-27 02:45:33.720431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.124 [2024-04-27 02:45:33.720438] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.124 [2024-04-27 02:45:33.723970] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.124 [2024-04-27 02:45:33.732519] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.124 [2024-04-27 02:45:33.733113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.733678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.124 [2024-04-27 02:45:33.733715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.124 [2024-04-27 02:45:33.733725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.124 [2024-04-27 02:45:33.733962] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.124 [2024-04-27 02:45:33.734183] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.124 [2024-04-27 02:45:33.734191] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.124 [2024-04-27 02:45:33.734198] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.124 [2024-04-27 02:45:33.737735] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.386 [2024-04-27 02:45:33.746485] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.386 [2024-04-27 02:45:33.747224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.386 [2024-04-27 02:45:33.747692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.386 [2024-04-27 02:45:33.747705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.386 [2024-04-27 02:45:33.747713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.386 [2024-04-27 02:45:33.747932] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.386 [2024-04-27 02:45:33.748149] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.386 [2024-04-27 02:45:33.748156] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.386 [2024-04-27 02:45:33.748163] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.386 [2024-04-27 02:45:33.751690] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.386 [2024-04-27 02:45:33.760450] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.386 [2024-04-27 02:45:33.761139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.386 [2024-04-27 02:45:33.761484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.386 [2024-04-27 02:45:33.761498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.386 [2024-04-27 02:45:33.761508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.386 [2024-04-27 02:45:33.761744] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.386 [2024-04-27 02:45:33.761965] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.386 [2024-04-27 02:45:33.761974] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.386 [2024-04-27 02:45:33.761981] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.386 [2024-04-27 02:45:33.765516] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.386 [2024-04-27 02:45:33.774247] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.386 [2024-04-27 02:45:33.775031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.386 [2024-04-27 02:45:33.775499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.386 [2024-04-27 02:45:33.775513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.386 [2024-04-27 02:45:33.775523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.386 [2024-04-27 02:45:33.775759] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.386 [2024-04-27 02:45:33.775980] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.386 [2024-04-27 02:45:33.775988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.386 [2024-04-27 02:45:33.775995] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.386 [2024-04-27 02:45:33.779530] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.386 [2024-04-27 02:45:33.788052] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.386 [2024-04-27 02:45:33.788817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.386 [2024-04-27 02:45:33.789297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.386 [2024-04-27 02:45:33.789313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.386 [2024-04-27 02:45:33.789322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.386 [2024-04-27 02:45:33.789559] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.386 [2024-04-27 02:45:33.789780] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.386 [2024-04-27 02:45:33.789788] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.386 [2024-04-27 02:45:33.789795] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.386 [2024-04-27 02:45:33.793337] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.386 [2024-04-27 02:45:33.801879] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.386 [2024-04-27 02:45:33.802659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.386 [2024-04-27 02:45:33.803144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.386 [2024-04-27 02:45:33.803157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.386 [2024-04-27 02:45:33.803166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.386 [2024-04-27 02:45:33.803409] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.386 [2024-04-27 02:45:33.803630] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.386 [2024-04-27 02:45:33.803638] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.386 [2024-04-27 02:45:33.803645] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.386 [2024-04-27 02:45:33.807186] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.386 [2024-04-27 02:45:33.815712] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.386 [2024-04-27 02:45:33.816307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.386 [2024-04-27 02:45:33.816779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.816789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.387 [2024-04-27 02:45:33.816796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.387 [2024-04-27 02:45:33.817015] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.387 [2024-04-27 02:45:33.817232] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.387 [2024-04-27 02:45:33.817240] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.387 [2024-04-27 02:45:33.817247] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.387 [2024-04-27 02:45:33.820782] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.387 [2024-04-27 02:45:33.829550] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.387 [2024-04-27 02:45:33.830376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.830859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.830872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.387 [2024-04-27 02:45:33.830881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.387 [2024-04-27 02:45:33.831118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.387 [2024-04-27 02:45:33.831352] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.387 [2024-04-27 02:45:33.831362] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.387 [2024-04-27 02:45:33.831369] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.387 [2024-04-27 02:45:33.834892] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.387 [2024-04-27 02:45:33.843411] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.387 [2024-04-27 02:45:33.844227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.844703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.844718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.387 [2024-04-27 02:45:33.844727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.387 [2024-04-27 02:45:33.844963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.387 [2024-04-27 02:45:33.845184] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.387 [2024-04-27 02:45:33.845192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.387 [2024-04-27 02:45:33.845199] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.387 [2024-04-27 02:45:33.848733] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.387 [2024-04-27 02:45:33.857255] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.387 [2024-04-27 02:45:33.858014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.858493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.858508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.387 [2024-04-27 02:45:33.858517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.387 [2024-04-27 02:45:33.858754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.387 [2024-04-27 02:45:33.858974] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.387 [2024-04-27 02:45:33.858982] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.387 [2024-04-27 02:45:33.858989] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.387 [2024-04-27 02:45:33.862525] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.387 [2024-04-27 02:45:33.871055] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.387 [2024-04-27 02:45:33.871812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.872290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.872316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.387 [2024-04-27 02:45:33.872326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.387 [2024-04-27 02:45:33.872563] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.387 [2024-04-27 02:45:33.872783] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.387 [2024-04-27 02:45:33.872791] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.387 [2024-04-27 02:45:33.872798] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.387 [2024-04-27 02:45:33.876331] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.387 [2024-04-27 02:45:33.884853] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.387 [2024-04-27 02:45:33.885640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.886120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.886133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.387 [2024-04-27 02:45:33.886142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.387 [2024-04-27 02:45:33.886392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.387 [2024-04-27 02:45:33.886614] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.387 [2024-04-27 02:45:33.886622] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.387 [2024-04-27 02:45:33.886630] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.387 [2024-04-27 02:45:33.890156] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.387 [2024-04-27 02:45:33.898697] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.387 [2024-04-27 02:45:33.899434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.899914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.899927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.387 [2024-04-27 02:45:33.899937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.387 [2024-04-27 02:45:33.900173] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.387 [2024-04-27 02:45:33.900406] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.387 [2024-04-27 02:45:33.900416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.387 [2024-04-27 02:45:33.900423] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.387 [2024-04-27 02:45:33.903946] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.387 [2024-04-27 02:45:33.912476] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.387 [2024-04-27 02:45:33.913206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.913674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.913685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.387 [2024-04-27 02:45:33.913697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.387 [2024-04-27 02:45:33.913916] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.387 [2024-04-27 02:45:33.914133] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.387 [2024-04-27 02:45:33.914141] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.387 [2024-04-27 02:45:33.914147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.387 [2024-04-27 02:45:33.917681] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.387 [2024-04-27 02:45:33.926411] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.387 [2024-04-27 02:45:33.927131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.927579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.927590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.387 [2024-04-27 02:45:33.927597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.387 [2024-04-27 02:45:33.927815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.387 [2024-04-27 02:45:33.928032] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.387 [2024-04-27 02:45:33.928040] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.387 [2024-04-27 02:45:33.928046] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.387 [2024-04-27 02:45:33.931579] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.387 [2024-04-27 02:45:33.940319] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.387 [2024-04-27 02:45:33.940995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.941451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.387 [2024-04-27 02:45:33.941461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.387 [2024-04-27 02:45:33.941468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.387 [2024-04-27 02:45:33.941686] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.387 [2024-04-27 02:45:33.941904] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.388 [2024-04-27 02:45:33.941911] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.388 [2024-04-27 02:45:33.941918] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.388 [2024-04-27 02:45:33.945448] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.388 [2024-04-27 02:45:33.954181] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.388 [2024-04-27 02:45:33.954905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.388 [2024-04-27 02:45:33.955272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.388 [2024-04-27 02:45:33.955298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.388 [2024-04-27 02:45:33.955312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.388 [2024-04-27 02:45:33.955570] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.388 [2024-04-27 02:45:33.955792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.388 [2024-04-27 02:45:33.955801] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.388 [2024-04-27 02:45:33.955808] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.388 [2024-04-27 02:45:33.959338] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.388 [2024-04-27 02:45:33.968076] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.388 [2024-04-27 02:45:33.968830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.388 [2024-04-27 02:45:33.969474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.388 [2024-04-27 02:45:33.969511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.388 [2024-04-27 02:45:33.969521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.388 [2024-04-27 02:45:33.969758] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.388 [2024-04-27 02:45:33.969978] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.388 [2024-04-27 02:45:33.969986] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.388 [2024-04-27 02:45:33.969993] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.388 [2024-04-27 02:45:33.973532] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.388 [2024-04-27 02:45:33.981848] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.388 [2024-04-27 02:45:33.982664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.388 [2024-04-27 02:45:33.983118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.388 [2024-04-27 02:45:33.983131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.388 [2024-04-27 02:45:33.983141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.388 [2024-04-27 02:45:33.983389] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.388 [2024-04-27 02:45:33.983611] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.388 [2024-04-27 02:45:33.983619] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.388 [2024-04-27 02:45:33.983626] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.388 [2024-04-27 02:45:33.987149] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.388 [2024-04-27 02:45:33.995666] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.388 [2024-04-27 02:45:33.996469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.388 [2024-04-27 02:45:33.996861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.388 [2024-04-27 02:45:33.996873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.388 [2024-04-27 02:45:33.996882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.388 [2024-04-27 02:45:33.997119] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.388 [2024-04-27 02:45:33.997356] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.388 [2024-04-27 02:45:33.997367] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.388 [2024-04-27 02:45:33.997374] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.388 [2024-04-27 02:45:34.000906] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.650 [2024-04-27 02:45:34.009456] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.650 [2024-04-27 02:45:34.010257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.650 [2024-04-27 02:45:34.010750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.650 [2024-04-27 02:45:34.010764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.650 [2024-04-27 02:45:34.010773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.650 [2024-04-27 02:45:34.011010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.650 [2024-04-27 02:45:34.011230] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.650 [2024-04-27 02:45:34.011238] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.650 [2024-04-27 02:45:34.011245] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.650 [2024-04-27 02:45:34.014778] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.650 [2024-04-27 02:45:34.023333] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.650 [2024-04-27 02:45:34.024138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.650 [2024-04-27 02:45:34.024693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.650 [2024-04-27 02:45:34.024708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.650 [2024-04-27 02:45:34.024717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.650 [2024-04-27 02:45:34.024954] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.650 [2024-04-27 02:45:34.025174] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.650 [2024-04-27 02:45:34.025183] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.650 [2024-04-27 02:45:34.025190] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.650 [2024-04-27 02:45:34.028732] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.650 [2024-04-27 02:45:34.037273] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.650 [2024-04-27 02:45:34.037968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.650 [2024-04-27 02:45:34.038422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.650 [2024-04-27 02:45:34.038432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.650 [2024-04-27 02:45:34.038440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.650 [2024-04-27 02:45:34.038658] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.650 [2024-04-27 02:45:34.038875] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.650 [2024-04-27 02:45:34.038888] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.650 [2024-04-27 02:45:34.038895] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.650 [2024-04-27 02:45:34.042437] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.650 [2024-04-27 02:45:34.051202] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.650 [2024-04-27 02:45:34.051904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.650 [2024-04-27 02:45:34.052389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.052400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.651 [2024-04-27 02:45:34.052408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.651 [2024-04-27 02:45:34.052626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.651 [2024-04-27 02:45:34.052843] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.651 [2024-04-27 02:45:34.052850] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.651 [2024-04-27 02:45:34.052856] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.651 [2024-04-27 02:45:34.056397] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.651 [2024-04-27 02:45:34.065151] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.651 [2024-04-27 02:45:34.065852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.066221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.066231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.651 [2024-04-27 02:45:34.066238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.651 [2024-04-27 02:45:34.066460] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.651 [2024-04-27 02:45:34.066678] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.651 [2024-04-27 02:45:34.066685] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.651 [2024-04-27 02:45:34.066694] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.651 [2024-04-27 02:45:34.070226] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.651 [2024-04-27 02:45:34.078984] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.651 [2024-04-27 02:45:34.079679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.080013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.080022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.651 [2024-04-27 02:45:34.080029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.651 [2024-04-27 02:45:34.080247] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.651 [2024-04-27 02:45:34.080469] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.651 [2024-04-27 02:45:34.080477] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.651 [2024-04-27 02:45:34.080488] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.651 [2024-04-27 02:45:34.084021] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.651 [2024-04-27 02:45:34.092765] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.651 [2024-04-27 02:45:34.093443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.093898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.093907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.651 [2024-04-27 02:45:34.093914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.651 [2024-04-27 02:45:34.094132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.651 [2024-04-27 02:45:34.094353] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.651 [2024-04-27 02:45:34.094362] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.651 [2024-04-27 02:45:34.094368] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.651 [2024-04-27 02:45:34.097901] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.651 [2024-04-27 02:45:34.106659] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.651 [2024-04-27 02:45:34.107382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.107854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.107863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.651 [2024-04-27 02:45:34.107870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.651 [2024-04-27 02:45:34.108088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.651 [2024-04-27 02:45:34.108310] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.651 [2024-04-27 02:45:34.108318] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.651 [2024-04-27 02:45:34.108325] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.651 [2024-04-27 02:45:34.111855] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.651 [2024-04-27 02:45:34.120612] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.651 [2024-04-27 02:45:34.121180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.121657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.121667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.651 [2024-04-27 02:45:34.121675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.651 [2024-04-27 02:45:34.121892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.651 [2024-04-27 02:45:34.122109] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.651 [2024-04-27 02:45:34.122116] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.651 [2024-04-27 02:45:34.122123] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.651 [2024-04-27 02:45:34.125666] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.651 [2024-04-27 02:45:34.134440] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.651 [2024-04-27 02:45:34.135126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.135660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.135697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.651 [2024-04-27 02:45:34.135708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.651 [2024-04-27 02:45:34.135944] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.651 [2024-04-27 02:45:34.136165] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.651 [2024-04-27 02:45:34.136173] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.651 [2024-04-27 02:45:34.136180] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.651 [2024-04-27 02:45:34.139725] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.651 [2024-04-27 02:45:34.148296] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.651 [2024-04-27 02:45:34.149014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.149559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.149596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.651 [2024-04-27 02:45:34.149606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.651 [2024-04-27 02:45:34.149843] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.651 [2024-04-27 02:45:34.150063] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.651 [2024-04-27 02:45:34.150071] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.651 [2024-04-27 02:45:34.150079] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.651 [2024-04-27 02:45:34.153632] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.651 [2024-04-27 02:45:34.162183] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.651 [2024-04-27 02:45:34.162976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.163463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.163486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.651 [2024-04-27 02:45:34.163495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.651 [2024-04-27 02:45:34.163732] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.651 [2024-04-27 02:45:34.163952] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.651 [2024-04-27 02:45:34.163960] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.651 [2024-04-27 02:45:34.163967] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.651 [2024-04-27 02:45:34.167509] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.651 [2024-04-27 02:45:34.176045] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.651 [2024-04-27 02:45:34.176746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.177200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.651 [2024-04-27 02:45:34.177211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.651 [2024-04-27 02:45:34.177218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.651 [2024-04-27 02:45:34.177448] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.652 [2024-04-27 02:45:34.177667] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.652 [2024-04-27 02:45:34.177675] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.652 [2024-04-27 02:45:34.177682] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.652 [2024-04-27 02:45:34.181205] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.652 [2024-04-27 02:45:34.189959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.652 [2024-04-27 02:45:34.190411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.652 [2024-04-27 02:45:34.190867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.652 [2024-04-27 02:45:34.190877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.652 [2024-04-27 02:45:34.190884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.652 [2024-04-27 02:45:34.191102] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.652 [2024-04-27 02:45:34.191325] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.652 [2024-04-27 02:45:34.191333] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.652 [2024-04-27 02:45:34.191339] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.652 [2024-04-27 02:45:34.194884] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.652 [2024-04-27 02:45:34.203843] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.652 [2024-04-27 02:45:34.204411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.652 [2024-04-27 02:45:34.204981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.652 [2024-04-27 02:45:34.204991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.652 [2024-04-27 02:45:34.204998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.652 [2024-04-27 02:45:34.205216] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.652 [2024-04-27 02:45:34.205445] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.652 [2024-04-27 02:45:34.205455] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.652 [2024-04-27 02:45:34.205462] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.652 [2024-04-27 02:45:34.208994] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.652 [2024-04-27 02:45:34.217748] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.652 [2024-04-27 02:45:34.218431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.652 [2024-04-27 02:45:34.218883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.652 [2024-04-27 02:45:34.218892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.652 [2024-04-27 02:45:34.218900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.652 [2024-04-27 02:45:34.219118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.652 [2024-04-27 02:45:34.219340] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.652 [2024-04-27 02:45:34.219348] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.652 [2024-04-27 02:45:34.219355] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.652 [2024-04-27 02:45:34.222889] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.652 [2024-04-27 02:45:34.231639] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.652 [2024-04-27 02:45:34.232306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.652 [2024-04-27 02:45:34.232764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.652 [2024-04-27 02:45:34.232774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.652 [2024-04-27 02:45:34.232781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.652 [2024-04-27 02:45:34.232998] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.652 [2024-04-27 02:45:34.233215] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.652 [2024-04-27 02:45:34.233222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.652 [2024-04-27 02:45:34.233229] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.652 [2024-04-27 02:45:34.236763] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.652 [2024-04-27 02:45:34.245519] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.652 [2024-04-27 02:45:34.246221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.652 [2024-04-27 02:45:34.246754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.652 [2024-04-27 02:45:34.246764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.652 [2024-04-27 02:45:34.246772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.652 [2024-04-27 02:45:34.246989] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.652 [2024-04-27 02:45:34.247207] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.652 [2024-04-27 02:45:34.247214] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.652 [2024-04-27 02:45:34.247221] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.652 [2024-04-27 02:45:34.250757] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 273403 Killed "${NVMF_APP[@]}" "$@" 00:26:00.652 02:45:34 -- host/bdevperf.sh@36 -- # tgt_init 00:26:00.652 02:45:34 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:00.652 02:45:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:00.652 02:45:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:00.652 02:45:34 -- common/autotest_common.sh@10 -- # set +x 00:26:00.652 [2024-04-27 02:45:34.259313] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.652 [2024-04-27 02:45:34.260013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.652 [2024-04-27 02:45:34.260478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.652 [2024-04-27 02:45:34.260488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.652 [2024-04-27 02:45:34.260496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.652 [2024-04-27 02:45:34.260714] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.652 [2024-04-27 02:45:34.260931] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.652 [2024-04-27 02:45:34.260939] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.652 [2024-04-27 02:45:34.260946] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:00.652 02:45:34 -- nvmf/common.sh@470 -- # nvmfpid=275074 00:26:00.652 02:45:34 -- nvmf/common.sh@471 -- # waitforlisten 275074 00:26:00.652 02:45:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:00.652 02:45:34 -- common/autotest_common.sh@817 -- # '[' -z 275074 ']' 00:26:00.652 02:45:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.652 [2024-04-27 02:45:34.264484] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.652 02:45:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:00.652 02:45:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.652 02:45:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:00.652 02:45:34 -- common/autotest_common.sh@10 -- # set +x 00:26:00.916 [2024-04-27 02:45:34.273235] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.916 [2024-04-27 02:45:34.273946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.916 [2024-04-27 02:45:34.274408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.916 [2024-04-27 02:45:34.274418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.916 [2024-04-27 02:45:34.274426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.916 [2024-04-27 02:45:34.274644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.916 [2024-04-27 02:45:34.274862] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.916 [2024-04-27 02:45:34.274870] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.916 [2024-04-27 02:45:34.274876] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.916 [2024-04-27 02:45:34.278416] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
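The shell trace above shows bdevperf.sh restarting the target: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with core mask 0xE, and waitforlisten then blocks until PID 275074 answers on /var/tmp/spdk.sock. A hedged sketch of what that wait amounts to (the real helper lives in autotest_common.sh; this loop is only an approximation of it):

    # Poll the new target's RPC socket until it responds, giving up if the
    # process dies first; nvmfpid and the socket path come from the trace above.
    rpc_sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; break; }
        sleep 0.1
    done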
00:26:00.916 [2024-04-27 02:45:34.287177] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.916 [2024-04-27 02:45:34.287869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.916 [2024-04-27 02:45:34.288337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.916 [2024-04-27 02:45:34.288347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.916 [2024-04-27 02:45:34.288354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.916 [2024-04-27 02:45:34.288577] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.916 [2024-04-27 02:45:34.288794] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.916 [2024-04-27 02:45:34.288801] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.916 [2024-04-27 02:45:34.288808] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.916 [2024-04-27 02:45:34.292345] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.916 [2024-04-27 02:45:34.301092] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.916 [2024-04-27 02:45:34.301812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.916 [2024-04-27 02:45:34.302195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.916 [2024-04-27 02:45:34.302205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.916 [2024-04-27 02:45:34.302213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.916 [2024-04-27 02:45:34.302436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.916 [2024-04-27 02:45:34.302654] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.916 [2024-04-27 02:45:34.302662] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.916 [2024-04-27 02:45:34.302669] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.916 [2024-04-27 02:45:34.306199] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.916 [2024-04-27 02:45:34.312716] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:26:00.916 [2024-04-27 02:45:34.312761] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.916 [2024-04-27 02:45:34.314970] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.916 [2024-04-27 02:45:34.315647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.916 [2024-04-27 02:45:34.315903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.916 [2024-04-27 02:45:34.315914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.916 [2024-04-27 02:45:34.315921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.916 [2024-04-27 02:45:34.316140] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.916 [2024-04-27 02:45:34.316364] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.916 [2024-04-27 02:45:34.316372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.916 [2024-04-27 02:45:34.316381] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.916 [2024-04-27 02:45:34.319910] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.916 [2024-04-27 02:45:34.328870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.916 [2024-04-27 02:45:34.329577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.916 [2024-04-27 02:45:34.330040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.916 [2024-04-27 02:45:34.330054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.916 [2024-04-27 02:45:34.330062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.916 [2024-04-27 02:45:34.330287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.916 [2024-04-27 02:45:34.330506] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.916 [2024-04-27 02:45:34.330513] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.916 [2024-04-27 02:45:34.330520] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.917 [2024-04-27 02:45:34.334051] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.917 [2024-04-27 02:45:34.342812] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.917 [2024-04-27 02:45:34.343383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.917 [2024-04-27 02:45:34.343872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.917 [2024-04-27 02:45:34.343881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.917 [2024-04-27 02:45:34.343888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.917 [2024-04-27 02:45:34.344107] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.917 [2024-04-27 02:45:34.344330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.917 [2024-04-27 02:45:34.344339] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.917 [2024-04-27 02:45:34.344345] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.917 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.917 [2024-04-27 02:45:34.347891] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.917 [2024-04-27 02:45:34.356645] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.917 [2024-04-27 02:45:34.357347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.917 [2024-04-27 02:45:34.357830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.917 [2024-04-27 02:45:34.357839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.917 [2024-04-27 02:45:34.357847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.917 [2024-04-27 02:45:34.358065] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.917 [2024-04-27 02:45:34.358287] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.917 [2024-04-27 02:45:34.358294] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.917 [2024-04-27 02:45:34.358301] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.917 [2024-04-27 02:45:34.361836] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.917 [2024-04-27 02:45:34.370650] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.917 [2024-04-27 02:45:34.371394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.917 [2024-04-27 02:45:34.371909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.917 [2024-04-27 02:45:34.371919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.917 [2024-04-27 02:45:34.371930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.917 [2024-04-27 02:45:34.372148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.917 [2024-04-27 02:45:34.372369] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.917 [2024-04-27 02:45:34.372377] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.917 [2024-04-27 02:45:34.372384] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.917 [2024-04-27 02:45:34.375926] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.917 [2024-04-27 02:45:34.378042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:00.917 [2024-04-27 02:45:34.384482] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.917 [2024-04-27 02:45:34.385184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.917 [2024-04-27 02:45:34.385668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.917 [2024-04-27 02:45:34.385679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.917 [2024-04-27 02:45:34.385687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.917 [2024-04-27 02:45:34.385906] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.917 [2024-04-27 02:45:34.386124] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.917 [2024-04-27 02:45:34.386131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.917 [2024-04-27 02:45:34.386138] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.917 [2024-04-27 02:45:34.389674] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
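The spdk_app_start notice in this stretch ("Total cores available: 3") follows from the -m 0xE mask passed to nvmf_tgt: 0xE is binary 1110, i.e. cores 1, 2 and 3, which is also why three reactors report starting on cores 1-3 a little further down. A small hedged check (not part of the test) that decodes the mask the way SPDK/DPDK interpret it:

    # One bit per CPU: bit n set means core n is used. Prints "1 2 3" for 0xE.
    printf 'mask 0xE selects cores:'
    for c in $(seq 0 7); do (( (0xE >> c) & 1 )) && printf ' %d' "$c"; done; echo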
00:26:00.917 [2024-04-27 02:45:34.398437] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.917 [2024-04-27 02:45:34.399126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.917 [2024-04-27 02:45:34.399393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.917 [2024-04-27 02:45:34.399415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.917 [2024-04-27 02:45:34.399425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.917 [2024-04-27 02:45:34.399652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.917 [2024-04-27 02:45:34.399871] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.917 [2024-04-27 02:45:34.399879] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.917 [2024-04-27 02:45:34.399886] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.917 [2024-04-27 02:45:34.403435] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.917 [2024-04-27 02:45:34.412414] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.917 [2024-04-27 02:45:34.413131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.917 [2024-04-27 02:45:34.413705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.917 [2024-04-27 02:45:34.413744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.917 [2024-04-27 02:45:34.413760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.917 [2024-04-27 02:45:34.414001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.917 [2024-04-27 02:45:34.414222] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.917 [2024-04-27 02:45:34.414231] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.917 [2024-04-27 02:45:34.414238] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.917 [2024-04-27 02:45:34.417786] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.917 [2024-04-27 02:45:34.426341] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.917 [2024-04-27 02:45:34.427044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.917 [2024-04-27 02:45:34.427603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.918 [2024-04-27 02:45:34.427640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.918 [2024-04-27 02:45:34.427651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.918 [2024-04-27 02:45:34.427890] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.918 [2024-04-27 02:45:34.428112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.918 [2024-04-27 02:45:34.428120] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.918 [2024-04-27 02:45:34.428127] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.918 [2024-04-27 02:45:34.431676] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.918 [2024-04-27 02:45:34.440223] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.918 [2024-04-27 02:45:34.440806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.918 [2024-04-27 02:45:34.441233] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.918 [2024-04-27 02:45:34.441260] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.918 [2024-04-27 02:45:34.441262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.918 [2024-04-27 02:45:34.441268] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.918 [2024-04-27 02:45:34.441275] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.918 [2024-04-27 02:45:34.441273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.918 [2024-04-27 02:45:34.441286] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
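The app_setup_trace notices above describe how to act on the 0xFFFF tracepoint mask nvmf_tgt was started with. A hedged example of following them, using the exact invocation the application prints (the copy destination is arbitrary):

    # Take a live snapshot of the enabled nvmf tracepoints, as the notice suggests,
    # or keep the shared-memory trace file around for offline analysis after the run.
    spdk_trace -s nvmf -i 0
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0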
00:26:00.918 [2024-04-27 02:45:34.441292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.918 [2024-04-27 02:45:34.441400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.918 [2024-04-27 02:45:34.441517] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.918 [2024-04-27 02:45:34.441636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:00.918 [2024-04-27 02:45:34.441639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.918 [2024-04-27 02:45:34.441734] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.918 [2024-04-27 02:45:34.441742] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.918 [2024-04-27 02:45:34.441754] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.918 [2024-04-27 02:45:34.445291] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.918 [2024-04-27 02:45:34.454046] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.918 [2024-04-27 02:45:34.454766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.918 [2024-04-27 02:45:34.455024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.918 [2024-04-27 02:45:34.455035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.918 [2024-04-27 02:45:34.455042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.918 [2024-04-27 02:45:34.455261] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.918 [2024-04-27 02:45:34.455485] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.918 [2024-04-27 02:45:34.455493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.918 [2024-04-27 02:45:34.455500] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.918 [2024-04-27 02:45:34.459034] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.918 [2024-04-27 02:45:34.468159] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.918 [2024-04-27 02:45:34.468699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.918 [2024-04-27 02:45:34.469167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.918 [2024-04-27 02:45:34.469176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.918 [2024-04-27 02:45:34.469184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.918 [2024-04-27 02:45:34.469416] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.918 [2024-04-27 02:45:34.469637] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.918 [2024-04-27 02:45:34.469645] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.918 [2024-04-27 02:45:34.469652] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.918 [2024-04-27 02:45:34.473175] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.918 [2024-04-27 02:45:34.482143] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.918 [2024-04-27 02:45:34.482852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.918 [2024-04-27 02:45:34.483224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.918 [2024-04-27 02:45:34.483234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.918 [2024-04-27 02:45:34.483242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.918 [2024-04-27 02:45:34.483466] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.918 [2024-04-27 02:45:34.483683] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.918 [2024-04-27 02:45:34.483691] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.918 [2024-04-27 02:45:34.483698] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.918 [2024-04-27 02:45:34.487241] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.918 [2024-04-27 02:45:34.496121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.918 [2024-04-27 02:45:34.496869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.918 [2024-04-27 02:45:34.497492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.918 [2024-04-27 02:45:34.497533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.918 [2024-04-27 02:45:34.497544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.918 [2024-04-27 02:45:34.497787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.918 [2024-04-27 02:45:34.498008] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.918 [2024-04-27 02:45:34.498016] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.919 [2024-04-27 02:45:34.498023] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.919 [2024-04-27 02:45:34.501572] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.919 [2024-04-27 02:45:34.509927] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.919 [2024-04-27 02:45:34.510636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.919 [2024-04-27 02:45:34.511135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.919 [2024-04-27 02:45:34.511145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.919 [2024-04-27 02:45:34.511153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.919 [2024-04-27 02:45:34.511377] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.919 [2024-04-27 02:45:34.511596] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.919 [2024-04-27 02:45:34.511603] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.919 [2024-04-27 02:45:34.511610] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.919 [2024-04-27 02:45:34.515149] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.919 [2024-04-27 02:45:34.523911] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.919 [2024-04-27 02:45:34.524694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.919 [2024-04-27 02:45:34.525154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.919 [2024-04-27 02:45:34.525167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:00.919 [2024-04-27 02:45:34.525177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:00.919 [2024-04-27 02:45:34.525431] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:00.919 [2024-04-27 02:45:34.525654] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.919 [2024-04-27 02:45:34.525662] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.919 [2024-04-27 02:45:34.525669] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.919 [2024-04-27 02:45:34.529197] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.182 [2024-04-27 02:45:34.537758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.182 [2024-04-27 02:45:34.538319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.182 [2024-04-27 02:45:34.538760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.182 [2024-04-27 02:45:34.538770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.182 [2024-04-27 02:45:34.538778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.182 [2024-04-27 02:45:34.539001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.182 [2024-04-27 02:45:34.539219] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.182 [2024-04-27 02:45:34.539227] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.182 [2024-04-27 02:45:34.539234] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.182 [2024-04-27 02:45:34.542778] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.182 [2024-04-27 02:45:34.551556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.182 [2024-04-27 02:45:34.552166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.182 [2024-04-27 02:45:34.552651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.182 [2024-04-27 02:45:34.552661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.182 [2024-04-27 02:45:34.552669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.182 [2024-04-27 02:45:34.552887] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.182 [2024-04-27 02:45:34.553105] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.182 [2024-04-27 02:45:34.553112] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.182 [2024-04-27 02:45:34.553119] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.182 [2024-04-27 02:45:34.556658] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.182 [2024-04-27 02:45:34.565422] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.182 [2024-04-27 02:45:34.566109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.182 [2024-04-27 02:45:34.566686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.182 [2024-04-27 02:45:34.566723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.182 [2024-04-27 02:45:34.566734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.182 [2024-04-27 02:45:34.566970] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.182 [2024-04-27 02:45:34.567191] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.182 [2024-04-27 02:45:34.567199] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.182 [2024-04-27 02:45:34.567206] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.182 [2024-04-27 02:45:34.570755] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.182 [2024-04-27 02:45:34.579359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.182 [2024-04-27 02:45:34.579820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.182 [2024-04-27 02:45:34.580295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.182 [2024-04-27 02:45:34.580307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.182 [2024-04-27 02:45:34.580315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.182 [2024-04-27 02:45:34.580534] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.182 [2024-04-27 02:45:34.580752] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.182 [2024-04-27 02:45:34.580759] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.182 [2024-04-27 02:45:34.580766] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.182 [2024-04-27 02:45:34.584305] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.182 [2024-04-27 02:45:34.593269] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.182 [2024-04-27 02:45:34.593969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.182 [2024-04-27 02:45:34.594427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.182 [2024-04-27 02:45:34.594438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.182 [2024-04-27 02:45:34.594445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.182 [2024-04-27 02:45:34.594663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.182 [2024-04-27 02:45:34.594880] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.182 [2024-04-27 02:45:34.594888] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.182 [2024-04-27 02:45:34.594894] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.182 [2024-04-27 02:45:34.598434] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the same disconnect/reconnect cycle (connect() failed, errno = 111 -> Bad file descriptor -> controller reinitialization failed -> Resetting controller failed.) repeats against tqpair=0x16cb2b0, addr=10.0.0.2, port=4420 from 02:45:34.607 through 02:45:35.071; the duplicate cycles are omitted here ...]
00:26:01.711 [2024-04-27 02:45:35.079600] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.711 [2024-04-27 02:45:35.080515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.711 [2024-04-27 02:45:35.080995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.711 [2024-04-27 02:45:35.081008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.711 [2024-04-27 02:45:35.081018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.711 [2024-04-27 02:45:35.081254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.711 [2024-04-27 02:45:35.081480] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.711 [2024-04-27 02:45:35.081489] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.711 [2024-04-27 02:45:35.081496] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.711 [2024-04-27 02:45:35.085027] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.711 02:45:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:01.711 02:45:35 -- common/autotest_common.sh@850 -- # return 0 00:26:01.711 02:45:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:01.711 02:45:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:01.711 02:45:35 -- common/autotest_common.sh@10 -- # set +x 00:26:01.711 [2024-04-27 02:45:35.093573] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.711 [2024-04-27 02:45:35.094151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.711 [2024-04-27 02:45:35.094639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.711 [2024-04-27 02:45:35.094656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.711 [2024-04-27 02:45:35.094664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.711 [2024-04-27 02:45:35.094882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.711 [2024-04-27 02:45:35.095099] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.711 [2024-04-27 02:45:35.095106] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.711 [2024-04-27 02:45:35.095113] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.711 [2024-04-27 02:45:35.098649] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.711 [2024-04-27 02:45:35.107401] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.711 [2024-04-27 02:45:35.108137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.711 [2024-04-27 02:45:35.108695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.711 [2024-04-27 02:45:35.108732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.711 [2024-04-27 02:45:35.108743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.711 [2024-04-27 02:45:35.108980] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.711 [2024-04-27 02:45:35.109201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.711 [2024-04-27 02:45:35.109210] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.711 [2024-04-27 02:45:35.109217] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.711 [2024-04-27 02:45:35.112757] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.711 [2024-04-27 02:45:35.121303] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.711 [2024-04-27 02:45:35.121810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.711 [2024-04-27 02:45:35.122270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.711 [2024-04-27 02:45:35.122285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.711 [2024-04-27 02:45:35.122294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.711 [2024-04-27 02:45:35.122513] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.711 [2024-04-27 02:45:35.122731] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.711 [2024-04-27 02:45:35.122738] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.711 [2024-04-27 02:45:35.122745] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.711 [2024-04-27 02:45:35.126272] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.711 02:45:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.711 02:45:35 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:01.711 02:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.711 02:45:35 -- common/autotest_common.sh@10 -- # set +x 00:26:01.711 [2024-04-27 02:45:35.133164] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.711 [2024-04-27 02:45:35.135219] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.711 [2024-04-27 02:45:35.135921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.711 [2024-04-27 02:45:35.136482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.712 [2024-04-27 02:45:35.136519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.712 [2024-04-27 02:45:35.136529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.712 [2024-04-27 02:45:35.136766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.712 [2024-04-27 02:45:35.136987] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.712 [2024-04-27 02:45:35.136995] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.712 [2024-04-27 02:45:35.137002] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.712 02:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.712 02:45:35 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:01.712 02:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.712 02:45:35 -- common/autotest_common.sh@10 -- # set +x 00:26:01.712 [2024-04-27 02:45:35.140546] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.712 [2024-04-27 02:45:35.149076] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.712 [2024-04-27 02:45:35.149823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.712 [2024-04-27 02:45:35.150183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.712 [2024-04-27 02:45:35.150193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.712 [2024-04-27 02:45:35.150201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.712 [2024-04-27 02:45:35.150423] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.712 [2024-04-27 02:45:35.150641] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.712 [2024-04-27 02:45:35.150649] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.712 [2024-04-27 02:45:35.150656] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:01.712 [2024-04-27 02:45:35.154182] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.712 [2024-04-27 02:45:35.162920] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.712 [2024-04-27 02:45:35.163721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.712 [2024-04-27 02:45:35.164209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.712 [2024-04-27 02:45:35.164222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.712 [2024-04-27 02:45:35.164231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.712 [2024-04-27 02:45:35.164474] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.712 [2024-04-27 02:45:35.164695] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.712 [2024-04-27 02:45:35.164703] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.712 [2024-04-27 02:45:35.164710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.712 [2024-04-27 02:45:35.168246] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.712 Malloc0 00:26:01.712 02:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.712 02:45:35 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:01.712 02:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.712 02:45:35 -- common/autotest_common.sh@10 -- # set +x 00:26:01.712 [2024-04-27 02:45:35.176801] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.712 [2024-04-27 02:45:35.177494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.712 [2024-04-27 02:45:35.177986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.712 [2024-04-27 02:45:35.177999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.712 [2024-04-27 02:45:35.178008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.712 [2024-04-27 02:45:35.178245] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.712 [2024-04-27 02:45:35.178472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.712 [2024-04-27 02:45:35.178481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.712 [2024-04-27 02:45:35.178488] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.712 [2024-04-27 02:45:35.182024] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.712 02:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.712 02:45:35 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:01.712 02:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.712 02:45:35 -- common/autotest_common.sh@10 -- # set +x 00:26:01.712 [2024-04-27 02:45:35.190765] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.712 [2024-04-27 02:45:35.191597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.712 [2024-04-27 02:45:35.192087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.712 [2024-04-27 02:45:35.192100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cb2b0 with addr=10.0.0.2, port=4420 00:26:01.712 [2024-04-27 02:45:35.192110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb2b0 is same with the state(5) to be set 00:26:01.712 [2024-04-27 02:45:35.192352] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.712 [2024-04-27 02:45:35.192573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.712 [2024-04-27 02:45:35.192581] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.712 [2024-04-27 02:45:35.192588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.712 [2024-04-27 02:45:35.196126] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.712 02:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.712 02:45:35 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.712 02:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.712 02:45:35 -- common/autotest_common.sh@10 -- # set +x 00:26:01.712 [2024-04-27 02:45:35.204692] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.712 [2024-04-27 02:45:35.205317] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.712 [2024-04-27 02:45:35.205502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.712 [2024-04-27 02:45:35.208208] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:26:01.712 [2024-04-27 02:45:35.208248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (107): Transport endpoint is not connected 00:26:01.712 [2024-04-27 02:45:35.208513] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cb2b0 (9): Bad file descriptor 00:26:01.712 [2024-04-27 02:45:35.208734] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.712 [2024-04-27 02:45:35.208743] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.712 [2024-04-27 02:45:35.208750] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:01.712 02:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.712 02:45:35 -- host/bdevperf.sh@38 -- # wait 273976 00:26:01.712 [2024-04-27 02:45:35.212291] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.712 [2024-04-27 02:45:35.218534] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.712 [2024-04-27 02:45:35.300163] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:11.719 00:26:11.719 Latency(us) 00:26:11.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.719 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:11.719 Verification LBA range: start 0x0 length 0x4000 00:26:11.719 Nvme1n1 : 15.01 6650.86 25.98 9502.61 0.00 7898.30 1051.31 18131.63 00:26:11.719 =================================================================================================================== 00:26:11.719 Total : 6650.86 25.98 9502.61 0.00 7898.30 1051.31 18131.63 00:26:11.719 02:45:43 -- host/bdevperf.sh@39 -- # sync 00:26:11.719 02:45:43 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:11.719 02:45:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.719 02:45:43 -- common/autotest_common.sh@10 -- # set +x 00:26:11.719 02:45:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.719 02:45:43 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:11.719 02:45:43 -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:11.719 02:45:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:11.719 02:45:43 -- nvmf/common.sh@117 -- # sync 00:26:11.719 02:45:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:11.719 02:45:43 -- nvmf/common.sh@120 -- # set +e 00:26:11.719 02:45:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:11.719 02:45:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:11.719 rmmod nvme_tcp 00:26:11.719 rmmod nvme_fabrics 00:26:11.719 rmmod nvme_keyring 00:26:11.719 02:45:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:11.719 02:45:43 -- nvmf/common.sh@124 -- # set -e 00:26:11.719 02:45:43 -- nvmf/common.sh@125 -- # return 0 00:26:11.719 02:45:43 -- nvmf/common.sh@478 -- # '[' -n 275074 ']' 00:26:11.719 02:45:43 -- nvmf/common.sh@479 -- # killprocess 275074 00:26:11.719 02:45:43 -- common/autotest_common.sh@936 -- # '[' -z 275074 ']' 00:26:11.719 02:45:43 -- common/autotest_common.sh@940 -- # kill -0 275074 00:26:11.719 02:45:43 -- common/autotest_common.sh@941 -- # uname 00:26:11.719 02:45:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:11.719 02:45:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 275074 00:26:11.719 02:45:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:11.719 02:45:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:11.719 02:45:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 275074' 00:26:11.719 killing process with pid 275074 00:26:11.719 02:45:43 -- common/autotest_common.sh@955 -- # kill 275074 00:26:11.719 02:45:43 -- common/autotest_common.sh@960 -- # wait 275074 00:26:11.719 02:45:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:11.719 02:45:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:11.719 02:45:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:11.719 02:45:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:26:11.719 02:45:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:11.719 02:45:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.719 02:45:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.719 02:45:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.661 02:45:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:12.661 00:26:12.661 real 0m27.539s 00:26:12.661 user 1m3.222s 00:26:12.661 sys 0m6.724s 00:26:12.661 02:45:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:12.661 02:45:46 -- common/autotest_common.sh@10 -- # set +x 00:26:12.661 ************************************ 00:26:12.661 END TEST nvmf_bdevperf 00:26:12.661 ************************************ 00:26:12.661 02:45:46 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:12.661 02:45:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:12.661 02:45:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:12.661 02:45:46 -- common/autotest_common.sh@10 -- # set +x 00:26:12.923 ************************************ 00:26:12.923 START TEST nvmf_target_disconnect 00:26:12.923 ************************************ 00:26:12.923 02:45:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:12.923 * Looking for test storage... 00:26:12.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:12.923 02:45:46 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.923 02:45:46 -- nvmf/common.sh@7 -- # uname -s 00:26:12.923 02:45:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.923 02:45:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.923 02:45:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.923 02:45:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.923 02:45:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.923 02:45:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.923 02:45:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.923 02:45:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.923 02:45:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.923 02:45:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.923 02:45:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:12.923 02:45:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:12.923 02:45:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.923 02:45:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.923 02:45:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.923 02:45:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.923 02:45:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:12.923 02:45:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.923 02:45:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.923 02:45:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.923 02:45:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.923 02:45:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.923 02:45:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.923 02:45:46 -- paths/export.sh@5 -- # export PATH 00:26:12.923 02:45:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.923 02:45:46 -- nvmf/common.sh@47 -- # : 0 00:26:12.923 02:45:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:12.923 02:45:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:12.923 02:45:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.923 02:45:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.923 02:45:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.923 02:45:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:12.923 02:45:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:12.923 02:45:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:12.923 02:45:46 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:12.923 02:45:46 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:12.923 02:45:46 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:12.923 02:45:46 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:26:12.923 02:45:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:12.923 02:45:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.923 02:45:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:12.923 02:45:46 -- nvmf/common.sh@399 -- # 
local -g is_hw=no 00:26:12.923 02:45:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:12.923 02:45:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.923 02:45:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:12.923 02:45:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.923 02:45:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:12.923 02:45:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:12.923 02:45:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:12.923 02:45:46 -- common/autotest_common.sh@10 -- # set +x 00:26:19.575 02:45:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:19.575 02:45:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:19.575 02:45:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:19.575 02:45:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:19.575 02:45:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:19.575 02:45:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:19.575 02:45:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:19.575 02:45:52 -- nvmf/common.sh@295 -- # net_devs=() 00:26:19.575 02:45:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:19.575 02:45:52 -- nvmf/common.sh@296 -- # e810=() 00:26:19.575 02:45:52 -- nvmf/common.sh@296 -- # local -ga e810 00:26:19.575 02:45:52 -- nvmf/common.sh@297 -- # x722=() 00:26:19.575 02:45:52 -- nvmf/common.sh@297 -- # local -ga x722 00:26:19.575 02:45:52 -- nvmf/common.sh@298 -- # mlx=() 00:26:19.575 02:45:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:19.575 02:45:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.575 02:45:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.575 02:45:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.575 02:45:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.575 02:45:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.575 02:45:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.575 02:45:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.575 02:45:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.575 02:45:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.575 02:45:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.575 02:45:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.575 02:45:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:19.575 02:45:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:19.575 02:45:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:19.575 02:45:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.575 02:45:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:19.575 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:19.575 02:45:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.575 
02:45:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.575 02:45:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:19.575 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:19.575 02:45:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:19.575 02:45:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.575 02:45:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.575 02:45:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:19.575 02:45:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.575 02:45:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:19.575 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:19.575 02:45:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.575 02:45:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.575 02:45:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.575 02:45:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:19.575 02:45:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.575 02:45:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:19.575 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:19.575 02:45:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.575 02:45:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:19.575 02:45:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:19.575 02:45:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:19.575 02:45:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:19.575 02:45:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.575 02:45:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.575 02:45:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.575 02:45:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:19.575 02:45:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.575 02:45:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.575 02:45:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:19.575 02:45:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.575 02:45:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.575 02:45:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:19.575 02:45:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:19.575 02:45:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.575 02:45:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.575 02:45:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.575 02:45:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.575 
02:45:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:19.575 02:45:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.575 02:45:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.575 02:45:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.575 02:45:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:19.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:19.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:26:19.575 00:26:19.575 --- 10.0.0.2 ping statistics --- 00:26:19.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.575 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:26:19.575 02:45:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:26:19.575 00:26:19.575 --- 10.0.0.1 ping statistics --- 00:26:19.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.575 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:26:19.575 02:45:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.575 02:45:53 -- nvmf/common.sh@411 -- # return 0 00:26:19.575 02:45:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:19.575 02:45:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.575 02:45:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:19.575 02:45:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:19.575 02:45:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.575 02:45:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:19.575 02:45:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:19.575 02:45:53 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:19.575 02:45:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:19.575 02:45:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:19.575 02:45:53 -- common/autotest_common.sh@10 -- # set +x 00:26:19.839 ************************************ 00:26:19.839 START TEST nvmf_target_disconnect_tc1 00:26:19.839 ************************************ 00:26:19.839 02:45:53 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:26:19.839 02:45:53 -- host/target_disconnect.sh@32 -- # set +e 00:26:19.839 02:45:53 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:19.839 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.839 [2024-04-27 02:45:53.363497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.839 [2024-04-27 02:45:53.363916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.839 [2024-04-27 02:45:53.363930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fadc50 with addr=10.0.0.2, port=4420 00:26:19.839 [2024-04-27 02:45:53.363960] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:19.839 [2024-04-27 02:45:53.363976] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:19.839 [2024-04-27 02:45:53.363984] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:19.839 spdk_nvme_probe() 
failed for transport address '10.0.0.2' 00:26:19.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:19.839 Initializing NVMe Controllers 00:26:19.839 02:45:53 -- host/target_disconnect.sh@33 -- # trap - ERR 00:26:19.839 02:45:53 -- host/target_disconnect.sh@33 -- # print_backtrace 00:26:19.839 02:45:53 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:26:19.839 02:45:53 -- common/autotest_common.sh@1139 -- # return 0 00:26:19.839 02:45:53 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:26:19.839 02:45:53 -- host/target_disconnect.sh@41 -- # set -e 00:26:19.839 00:26:19.839 real 0m0.103s 00:26:19.839 user 0m0.041s 00:26:19.839 sys 0m0.063s 00:26:19.839 02:45:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:19.839 02:45:53 -- common/autotest_common.sh@10 -- # set +x 00:26:19.839 ************************************ 00:26:19.839 END TEST nvmf_target_disconnect_tc1 00:26:19.839 ************************************ 00:26:19.839 02:45:53 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:19.839 02:45:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:19.839 02:45:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:19.839 02:45:53 -- common/autotest_common.sh@10 -- # set +x 00:26:20.100 ************************************ 00:26:20.100 START TEST nvmf_target_disconnect_tc2 00:26:20.100 ************************************ 00:26:20.100 02:45:53 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:26:20.100 02:45:53 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:26:20.100 02:45:53 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:20.100 02:45:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:20.100 02:45:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:20.100 02:45:53 -- common/autotest_common.sh@10 -- # set +x 00:26:20.100 02:45:53 -- nvmf/common.sh@470 -- # nvmfpid=281209 00:26:20.100 02:45:53 -- nvmf/common.sh@471 -- # waitforlisten 281209 00:26:20.100 02:45:53 -- common/autotest_common.sh@817 -- # '[' -z 281209 ']' 00:26:20.100 02:45:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.100 02:45:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:20.100 02:45:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.100 02:45:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:20.100 02:45:53 -- common/autotest_common.sh@10 -- # set +x 00:26:20.100 02:45:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:20.100 [2024-04-27 02:45:53.620121] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:26:20.100 [2024-04-27 02:45:53.620174] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.100 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.100 [2024-04-27 02:45:53.705149] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:20.361 [2024-04-27 02:45:53.768921] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.361 [2024-04-27 02:45:53.768954] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.361 [2024-04-27 02:45:53.768962] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.361 [2024-04-27 02:45:53.768968] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.361 [2024-04-27 02:45:53.768974] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.361 [2024-04-27 02:45:53.769111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:20.361 [2024-04-27 02:45:53.769259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:20.361 [2024-04-27 02:45:53.769423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:20.361 [2024-04-27 02:45:53.769519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:20.933 02:45:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:20.933 02:45:54 -- common/autotest_common.sh@850 -- # return 0 00:26:20.933 02:45:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:20.933 02:45:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:20.933 02:45:54 -- common/autotest_common.sh@10 -- # set +x 00:26:20.933 02:45:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.933 02:45:54 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:20.933 02:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.933 02:45:54 -- common/autotest_common.sh@10 -- # set +x 00:26:20.933 Malloc0 00:26:20.933 02:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.933 02:45:54 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:20.933 02:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.933 02:45:54 -- common/autotest_common.sh@10 -- # set +x 00:26:20.933 [2024-04-27 02:45:54.499102] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.933 02:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.933 02:45:54 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:20.933 02:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.933 02:45:54 -- common/autotest_common.sh@10 -- # set +x 00:26:20.933 02:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.933 02:45:54 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:20.933 02:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.933 02:45:54 -- common/autotest_common.sh@10 -- # set +x 00:26:20.933 02:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.934 02:45:54 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.934 02:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.934 02:45:54 -- common/autotest_common.sh@10 -- # set +x 00:26:20.934 [2024-04-27 02:45:54.539443] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.934 02:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.934 02:45:54 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:20.934 02:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.934 02:45:54 -- common/autotest_common.sh@10 -- # set +x 00:26:21.194 02:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.194 02:45:54 -- host/target_disconnect.sh@50 -- # reconnectpid=281390 00:26:21.194 02:45:54 -- host/target_disconnect.sh@52 -- # sleep 2 00:26:21.194 02:45:54 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:21.194 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.111 02:45:56 -- host/target_disconnect.sh@53 -- # kill -9 281209 00:26:23.111 02:45:56 -- host/target_disconnect.sh@55 -- # sleep 2 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Write completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Write completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Read completed with error (sct=0, sc=8) 00:26:23.111 starting I/O failed 00:26:23.111 Write completed with error (sct=0, sc=8) 00:26:23.112 starting I/O failed 00:26:23.112 Read completed with error (sct=0, sc=8) 00:26:23.112 starting I/O failed 00:26:23.112 Write completed with error (sct=0, sc=8) 00:26:23.112 starting I/O failed 00:26:23.112 Read completed with error (sct=0, sc=8) 00:26:23.112 starting I/O failed 00:26:23.112 Write completed with error (sct=0, sc=8) 00:26:23.112 starting I/O failed 00:26:23.112 Read completed with error (sct=0, sc=8) 00:26:23.112 starting I/O failed 00:26:23.112 Read completed with error (sct=0, sc=8) 00:26:23.112 starting I/O failed 00:26:23.112 Write completed with error (sct=0, sc=8) 
00:26:23.112 starting I/O failed 00:26:23.112 Write completed with error (sct=0, sc=8) 00:26:23.112 starting I/O failed 00:26:23.112 Write completed with error (sct=0, sc=8) 00:26:23.112 starting I/O failed 00:26:23.112 Read completed with error (sct=0, sc=8) 00:26:23.112 starting I/O failed 00:26:23.112 Read completed with error (sct=0, sc=8) 00:26:23.112 starting I/O failed 00:26:23.112 Read completed with error (sct=0, sc=8) 00:26:23.112 starting I/O failed 00:26:23.112 Read completed with error (sct=0, sc=8) 00:26:23.112 starting I/O failed 00:26:23.112 [2024-04-27 02:45:56.572377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:23.112 [2024-04-27 02:45:56.572984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.573572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.573603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.574091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.574569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.574603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.575091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.575678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.575706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.576191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.576787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.576816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.577169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.577592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.577622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.578117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.578326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.578340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 
00:26:23.112 [2024-04-27 02:45:56.578807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.579261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.579268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.579739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.580240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.580248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.580811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.581503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.581531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.581883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.582499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.582528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.582985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.583534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.583563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.584001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.584400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.584408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.584896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.585400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.585408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 
00:26:23.112 [2024-04-27 02:45:56.585871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.586180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.586187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.586556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.586923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.586931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.587410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.587905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.587913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.588382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.588601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.588613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.589083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.589433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.589440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.589930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.590381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.590389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.590630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.591090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.591098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 
00:26:23.112 [2024-04-27 02:45:56.591565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.592057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.592064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.592292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.592676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.592684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.593168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.593656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.593664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.112 [2024-04-27 02:45:56.594142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.594585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.112 [2024-04-27 02:45:56.594614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.112 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.595056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.595607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.595636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.596120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.596578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.596607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.597065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.597657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.597686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 
00:26:23.113 [2024-04-27 02:45:56.598187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.598683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.598712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.599155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.599555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.599584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.599936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.600293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.600302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.600768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.601219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.601226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.601729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.602103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.602110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.602698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.603219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.603228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.603778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.604183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.604193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 
00:26:23.113 [2024-04-27 02:45:56.604679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.605173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.605181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.605647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.606136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.606144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.606701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.607219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.607228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.607744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.608106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.608117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.608637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.609124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.609133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.609694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.610180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.610190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.610743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.611251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.611260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 
00:26:23.113 [2024-04-27 02:45:56.611813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.612436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.612466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.612927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.613515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.613544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.614031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.614513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.614542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.614996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.615579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.615607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.616096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.616692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.616720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.617207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.617685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.617714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.618202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.618662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.618670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 
00:26:23.113 [2024-04-27 02:45:56.619132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.619701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.619731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.620167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.620707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.620737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.621105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.621545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.621573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.113 qpair failed and we were unable to recover it. 00:26:23.113 [2024-04-27 02:45:56.622038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.622620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.113 [2024-04-27 02:45:56.622648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.623143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.623711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.623739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.624192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.624720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.624749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.625198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.625624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.625632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 
00:26:23.114 [2024-04-27 02:45:56.625856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.626327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.626336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.626824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.627313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.627321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.627801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.628155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.628162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.628610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.629093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.629101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.629486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.629975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.629982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.630534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.631067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.631077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.631734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.632261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.632271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 
00:26:23.114 [2024-04-27 02:45:56.632499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.632960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.632969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.633543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.634065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.634075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.634655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.635173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.635183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.635642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.636086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.636093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.636670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.637060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.637070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.637541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.638366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.638385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.638862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.639370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.639379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 
00:26:23.114 [2024-04-27 02:45:56.639835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.640332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.640340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.640718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.641164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.641171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.641645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.642095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.642102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.642562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.643055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.643063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.643651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.644125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.644134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.644709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.645184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.645194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.645778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.646442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.646472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 
00:26:23.114 [2024-04-27 02:45:56.646947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.647518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.647547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.648033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.648573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.648601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.649087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.649669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.649699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.114 qpair failed and we were unable to recover it. 00:26:23.114 [2024-04-27 02:45:56.650188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.114 [2024-04-27 02:45:56.650685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.650693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.651177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.651733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.651762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.652302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.652640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.652649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.653142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.653595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.653602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 
00:26:23.115 [2024-04-27 02:45:56.654083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.654667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.654696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.655147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.655724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.655753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.656241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.656741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.656750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.657199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.657784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.657813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.658289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.658794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.658801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.659280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.659723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.659731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.660214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.660643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.660673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 
00:26:23.115 [2024-04-27 02:45:56.661171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.661630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.661638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.662087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.662670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.662698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.662912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.663254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.663263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.663593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.664047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.664054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.664623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.665139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.665149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.665712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.666184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.666194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.666707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.667183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.667193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 
00:26:23.115 [2024-04-27 02:45:56.667655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.668105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.668113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.668674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.669148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.669157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.669531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.670033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.670041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.670574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.671054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.671064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.671639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.672109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.672119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.672596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.673122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.673132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.673701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.674223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.674233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 
00:26:23.115 [2024-04-27 02:45:56.674792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.675431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.675460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.675852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.676496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.676525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.676880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.677288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.677296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.115 [2024-04-27 02:45:56.677778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.678096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.115 [2024-04-27 02:45:56.678104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.115 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.678447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.678898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.678906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.679390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.679881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.679889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.680372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.680704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.680712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 
00:26:23.116 [2024-04-27 02:45:56.681181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.681642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.681649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.682129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.682609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.682616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.682965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.683492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.683521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.684012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.684524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.684553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.684783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.685216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.685224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.685682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.686176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.686184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.686649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.687033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.687041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 
00:26:23.116 [2024-04-27 02:45:56.687297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.687798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.687806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.688155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.688601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.688609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.688955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.689430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.689460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.689928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.690422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.690430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.690910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.691299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.691307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.691794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.692134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.692142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.692588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.693077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.693084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 
00:26:23.116 [2024-04-27 02:45:56.693660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.694180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.694190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.116 qpair failed and we were unable to recover it. 00:26:23.116 [2024-04-27 02:45:56.694657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.695153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.116 [2024-04-27 02:45:56.695160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.695706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.696226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.696236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.696582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.697084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.697095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.697571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.698091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.698100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.698688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.699191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.699200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.699765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.700153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.700164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 
00:26:23.117 [2024-04-27 02:45:56.700619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.700980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.700991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.701567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.702061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.702071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.702609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.703127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.703137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.703521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.704178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.704196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.704674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.705180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.705188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.705649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.706104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.706114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.706597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.706977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.706984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 
00:26:23.117 [2024-04-27 02:45:56.707558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.708057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.708066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.708622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.709145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.709155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.709511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.709991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.709999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.710587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.711121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.711134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.711692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.712207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.712217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.712689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.713191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.713199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.713735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.714258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.714268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 
00:26:23.117 [2024-04-27 02:45:56.714776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.715227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.715234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.715783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.716260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.716270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.716866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.717499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.717528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.717993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.718498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.718526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.719016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.719551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.719580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.720104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.720656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.720685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.721169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.721662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.721673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 
00:26:23.117 [2024-04-27 02:45:56.722135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.722664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.722693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.723218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.723750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.117 [2024-04-27 02:45:56.723780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.117 qpair failed and we were unable to recover it. 00:26:23.117 [2024-04-27 02:45:56.724246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.118 [2024-04-27 02:45:56.724815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.118 [2024-04-27 02:45:56.724844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.118 qpair failed and we were unable to recover it. 00:26:23.118 [2024-04-27 02:45:56.725200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.118 [2024-04-27 02:45:56.725733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.118 [2024-04-27 02:45:56.725763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.118 qpair failed and we were unable to recover it. 00:26:23.118 [2024-04-27 02:45:56.726227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.118 [2024-04-27 02:45:56.726792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.118 [2024-04-27 02:45:56.726822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.118 qpair failed and we were unable to recover it. 00:26:23.384 [2024-04-27 02:45:56.727168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.727638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.727646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.384 qpair failed and we were unable to recover it. 00:26:23.384 [2024-04-27 02:45:56.728138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.728645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.728674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.384 qpair failed and we were unable to recover it. 
00:26:23.384 [2024-04-27 02:45:56.729169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.729698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.729727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.384 qpair failed and we were unable to recover it. 00:26:23.384 [2024-04-27 02:45:56.730218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.730701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.730708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.384 qpair failed and we were unable to recover it. 00:26:23.384 [2024-04-27 02:45:56.731184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.731729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.731762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.384 qpair failed and we were unable to recover it. 00:26:23.384 [2024-04-27 02:45:56.732302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.732816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.732825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.384 qpair failed and we were unable to recover it. 00:26:23.384 [2024-04-27 02:45:56.733185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.733548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.733556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.384 qpair failed and we were unable to recover it. 00:26:23.384 [2024-04-27 02:45:56.734013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.734245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.734257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.384 qpair failed and we were unable to recover it. 00:26:23.384 [2024-04-27 02:45:56.734705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.735073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.735081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.384 qpair failed and we were unable to recover it. 
00:26:23.384 [2024-04-27 02:45:56.735641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.735964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.735972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.384 qpair failed and we were unable to recover it. 00:26:23.384 [2024-04-27 02:45:56.736523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.736902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.736912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.384 qpair failed and we were unable to recover it. 00:26:23.384 [2024-04-27 02:45:56.737390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.737724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.737732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.384 qpair failed and we were unable to recover it. 00:26:23.384 [2024-04-27 02:45:56.738226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.738676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.738683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.384 qpair failed and we were unable to recover it. 00:26:23.384 [2024-04-27 02:45:56.739176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.384 [2024-04-27 02:45:56.739666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.739673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.740167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.740614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.740621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.741069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.741624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.741652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 
00:26:23.385 [2024-04-27 02:45:56.742104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.742685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.742714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.742942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.743294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.743303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.743839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.744302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.744310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.744830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.745281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.745289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.745740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.746199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.746208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.746682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.747032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.747040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.747485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.747972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.747981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 
00:26:23.385 [2024-04-27 02:45:56.748574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.748955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.748964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.749456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.749923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.749931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.750403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.750902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.750909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.751363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.751838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.751845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.752347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.752822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.752830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.753314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.753790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.753797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.753909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.754363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.754372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 
00:26:23.385 [2024-04-27 02:45:56.754819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.755314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.755322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.755796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.756272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.756283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.756738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.757242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.757250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.757623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.758108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.758115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.758700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.759175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.759185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.759526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.759988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.759995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.760591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.761078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.761089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 
00:26:23.385 [2024-04-27 02:45:56.761663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.762184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.762194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.762651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.763110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.763117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.763678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.764064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.764074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.764663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.765128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.765138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.385 [2024-04-27 02:45:56.765717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.766222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.385 [2024-04-27 02:45:56.766232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.385 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.766625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.767016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.767026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.767606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.768100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.768110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 
00:26:23.386 [2024-04-27 02:45:56.768694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.769073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.769084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.769580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.770112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.770122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.770705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.771192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.771201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.771769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.772270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.772285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.772738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.773239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.773249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.773571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.774104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.774116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.774652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.775147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.775157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 
00:26:23.386 [2024-04-27 02:45:56.775645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.776146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.776154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.776736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.777176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.777187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.777769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.778268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.778290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.778823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.779073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.779087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.779659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.780174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.780184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.780649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.781106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.781114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.781648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.782189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.782199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 
00:26:23.386 [2024-04-27 02:45:56.782746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.783249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.783258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.783873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.784106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.784119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.784646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.785031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.785040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.785519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.785976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.785984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.786357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.786819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.786826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.787180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.787632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.787639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.788145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.788580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.788587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 
00:26:23.386 [2024-04-27 02:45:56.789074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.789617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.789645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.790138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.790712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.790741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.791218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.791817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.791847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.792212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.792761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.792790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.793151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.793721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.793750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.386 qpair failed and we were unable to recover it. 00:26:23.386 [2024-04-27 02:45:56.794129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.386 [2024-04-27 02:45:56.794659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.794689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.795054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.795704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.795733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 
00:26:23.387 [2024-04-27 02:45:56.796224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.796820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.796849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.797498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.797873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.797882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.797972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.798415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.798424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.798932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.799314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.799322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.799803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.800157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.800166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.800638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.801077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.801085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.801569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.802054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.802061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 
00:26:23.387 [2024-04-27 02:45:56.802649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.803166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.803176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.803640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.804126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.804134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.804627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.805146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.805156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.805626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.806127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.806134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.806511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.807039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.807049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.807619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.808100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.808110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.808574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.808988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.808999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 
00:26:23.387 [2024-04-27 02:45:56.809572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.810061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.810071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.810539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.811074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.811084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.811653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.812149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.812159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.812392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.812858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.812866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.813349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.813832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.813839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.814190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.814542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.814550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.815007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.815510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.815518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 
00:26:23.387 [2024-04-27 02:45:56.815866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.816319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.816327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.816779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.817229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.817236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.817707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.818048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.818056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.818392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.818800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.818807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.819273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.819775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.819782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.820268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.820576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.387 [2024-04-27 02:45:56.820604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.387 qpair failed and we were unable to recover it. 00:26:23.387 [2024-04-27 02:45:56.821070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.821654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.821683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 
00:26:23.388 [2024-04-27 02:45:56.822172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.822816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.822824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.823273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.823860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.823888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.824503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.824888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.824898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.825504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.825889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.825899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.826365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.826886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.826894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.827391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.827862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.827870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.828353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.828800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.828808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 
00:26:23.388 [2024-04-27 02:45:56.829333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.829685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.829693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.830236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.830710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.830719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.831133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.831584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.831613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.831972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.832490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.832499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.832947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.833460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.833490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.833946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.834354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.834362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.834843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.835355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.835362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 
00:26:23.388 [2024-04-27 02:45:56.835864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.836402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.836409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.836896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.837218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.837225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.837704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.838153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.838161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.838558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.839077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.839088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.839677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.840197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.840207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.840660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.841189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.841199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.841671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.842166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.842173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 
00:26:23.388 [2024-04-27 02:45:56.842728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.843021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.843029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.843625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.844126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.844136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.844703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.845129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.845139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.845705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.846206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.846215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.846782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.847491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.847520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.847911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.848507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.848536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.388 qpair failed and we were unable to recover it. 00:26:23.388 [2024-04-27 02:45:56.848884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.388 [2024-04-27 02:45:56.849388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.849396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 
00:26:23.389 [2024-04-27 02:45:56.849829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.850316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.850324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.850770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.851273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.851285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.851772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.852269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.852280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.852770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.853093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.853100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.853572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.854093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.854102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.854664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.855150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.855159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.855732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.856112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.856122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 
00:26:23.389 [2024-04-27 02:45:56.856700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.857225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.857239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.857824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.858186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.858195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.858452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.858783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.858791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.859266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.859516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.859525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.859897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.860245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.860252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.860520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.860861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.860868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.861226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.861514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.861523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 
00:26:23.389 [2024-04-27 02:45:56.861888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.862381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.862389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.862791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.863270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.863283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.863790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.864244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.864252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.864722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.865137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.865148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.865756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.866206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.866215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.866651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.867175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.867186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.867427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.867939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.867947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 
00:26:23.389 [2024-04-27 02:45:56.868485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.868994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.869003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.869498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.869858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.389 [2024-04-27 02:45:56.869866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.389 qpair failed and we were unable to recover it. 00:26:23.389 [2024-04-27 02:45:56.870347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.870818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.870826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.871157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.871523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.871531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.872022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.872610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.872640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.873103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.873653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.873682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.874107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.874696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.874728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 
00:26:23.390 [2024-04-27 02:45:56.875204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.875783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.875812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.876271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.876872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.876901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.877512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.878037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.878047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.878608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.879114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.879124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.879605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.880009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.880018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.880621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.881102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.881111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.881701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.882189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.882199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 
00:26:23.390 [2024-04-27 02:45:56.882561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.883050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.883061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.883607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.884086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.884096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.884570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.885028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.885041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.885664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.886167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.886177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.886551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.886970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.886977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.887596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.888122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.888132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.888733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.889246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.889256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 
00:26:23.390 [2024-04-27 02:45:56.889745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.890286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.890297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.890895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.891525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.891555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.892039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.892553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.892582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.893111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.893649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.893679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.894149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.894711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.894740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.895210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.895759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.895788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.896240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.896757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.896786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 
00:26:23.390 [2024-04-27 02:45:56.897516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.897990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.897999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.898484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.898866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.898875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.390 qpair failed and we were unable to recover it. 00:26:23.390 [2024-04-27 02:45:56.899251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.390 [2024-04-27 02:45:56.899740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.899748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.900205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.900774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.900803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.901158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.901614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.901643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.902118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.902632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.902661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.903009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.903618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.903647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 
00:26:23.391 [2024-04-27 02:45:56.904108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.904692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.904721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.905217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.905796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.905825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.906461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.906900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.906910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.907516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.908002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.908012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.908609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.909115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.909125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.909697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.910238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.910248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.910729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.911238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.911248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 
00:26:23.391 [2024-04-27 02:45:56.911749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.912237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.912247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.912729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.912965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.912978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.913451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.913789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.913796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.914262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.914733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.914741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.915222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.915709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.915717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.916169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.916646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.916654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.916877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.917337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.917345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 
00:26:23.391 [2024-04-27 02:45:56.917824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.918250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.918257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.918736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.919235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.919243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.919707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.920085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.920092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.920756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.921148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.921158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.921806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.922455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.922483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.922973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.923518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.923547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.924037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.924548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.924577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 
00:26:23.391 [2024-04-27 02:45:56.925042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.925273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.925298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.925822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.926464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.926493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.926963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.927514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.927544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.391 qpair failed and we were unable to recover it. 00:26:23.391 [2024-04-27 02:45:56.927910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.391 [2024-04-27 02:45:56.928494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.928524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.928985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.929595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.929624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.930106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.930580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.930609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.930954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.931460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.931468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 
00:26:23.392 [2024-04-27 02:45:56.931975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.932242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.932249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.932719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.933076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.933083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.933649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.934159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.934169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.934676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.935016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.935024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.935601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.936088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.936098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.936574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.937067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.937076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.937665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.938153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.938162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 
00:26:23.392 [2024-04-27 02:45:56.938626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.938996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.939003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.939558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.940062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.940072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.940685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.941078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.941088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.941611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.942100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.942110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.942597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.942959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.942970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.943569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.943949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.943959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.944456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.944922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.944929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 
00:26:23.392 [2024-04-27 02:45:56.945536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.946051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.946060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.946628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.947121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.947131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.947624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.947991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.948002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.948579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.949106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.949116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.949592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.950107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.950117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.950536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.950920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.950930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.951287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.951761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.951769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 
00:26:23.392 [2024-04-27 02:45:56.952252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.952842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.952871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.953499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.953984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.953994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.954555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.955036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.955045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.955609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.956127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.392 [2024-04-27 02:45:56.956137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.392 qpair failed and we were unable to recover it. 00:26:23.392 [2024-04-27 02:45:56.956746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.957109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.957119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.957588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.958113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.958123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.958554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.959062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.959072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 
00:26:23.393 [2024-04-27 02:45:56.959654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.960122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.960131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.960626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.961106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.961116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.961694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.962218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.962228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.962733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.963260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.963270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.963749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.964113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.964123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.964704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.965145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.965155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.965743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.966246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.966256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 
00:26:23.393 [2024-04-27 02:45:56.966831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.967268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.967286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.967860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.968505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.968534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.968978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.969525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.969554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.969928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.970254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.970262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.970661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.971154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.971161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.971500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.972002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.972009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.972580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.973059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.973069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 
00:26:23.393 [2024-04-27 02:45:56.973648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.973876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.973886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.974430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.974870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.974878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.975381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.975770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.975778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.976274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.976789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.976797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.977250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.977783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.977812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.978285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.978816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.978845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 00:26:23.393 [2024-04-27 02:45:56.979499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.979944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.979954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.393 qpair failed and we were unable to recover it. 
00:26:23.393 [2024-04-27 02:45:56.980568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.981059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.393 [2024-04-27 02:45:56.981069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.981661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.982050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.982060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.982614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.983121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.983130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.983595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.984077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.984087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.984688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.985191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.985200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.985750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.986139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.986149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.986712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.987199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.987209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-27 02:45:56.987786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.988284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.988295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.988757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.989286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.989297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.989946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.990545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.990574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.991024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.991517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.991546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.992024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.992638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.992667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.993110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.993689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.993718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.994184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.994656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.994686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-27 02:45:56.995017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.995637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.995666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.996129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.996707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.394 [2024-04-27 02:45:56.996735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.394 qpair failed and we were unable to recover it. 00:26:23.394 [2024-04-27 02:45:56.997206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:56.997738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:56.997768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.660 qpair failed and we were unable to recover it. 00:26:23.660 [2024-04-27 02:45:56.997992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:56.998428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:56.998437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.660 qpair failed and we were unable to recover it. 00:26:23.660 [2024-04-27 02:45:56.998791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:56.999241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:56.999249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.660 qpair failed and we were unable to recover it. 00:26:23.660 [2024-04-27 02:45:56.999732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.000180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.000188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.660 qpair failed and we were unable to recover it. 00:26:23.660 [2024-04-27 02:45:57.000698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.001192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.001200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.660 qpair failed and we were unable to recover it. 
00:26:23.660 [2024-04-27 02:45:57.001681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.002126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.002133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.660 qpair failed and we were unable to recover it. 00:26:23.660 [2024-04-27 02:45:57.002686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.003201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.003211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.660 qpair failed and we were unable to recover it. 00:26:23.660 [2024-04-27 02:45:57.003590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.004084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.004094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.660 qpair failed and we were unable to recover it. 00:26:23.660 [2024-04-27 02:45:57.004712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.004977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.004987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.660 qpair failed and we were unable to recover it. 00:26:23.660 [2024-04-27 02:45:57.005562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.006054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.006067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.660 qpair failed and we were unable to recover it. 00:26:23.660 [2024-04-27 02:45:57.006512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.006992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.660 [2024-04-27 02:45:57.007002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.007584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.008107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.008117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 
00:26:23.661 [2024-04-27 02:45:57.008573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.008939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.008949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.009465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.009826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.009836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.010381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.010753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.010760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.011224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.011664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.011671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.012163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.012612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.012619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.013082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.013575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.013604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.014091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.014678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.014707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 
00:26:23.661 [2024-04-27 02:45:57.015160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.015612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.015624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.015988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.016566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.016595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.017069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.017633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.017662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.018132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.018596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.018624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.019113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.019597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.019626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.020098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.020685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.020714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.021163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.021582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.021611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 
00:26:23.661 [2024-04-27 02:45:57.022090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.022560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.022589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.023056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.023643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.023671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.024142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.024706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.024735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.025088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.025492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.025524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.025873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.026505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.026534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.026990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.027517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.027546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.028018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.028465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.028494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 
00:26:23.661 [2024-04-27 02:45:57.028961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.029419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.029427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.029871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.030247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.030255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.030485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.030709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.030721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.031204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.031680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.031688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.032225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.032704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.032712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.033194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.033595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.661 [2024-04-27 02:45:57.033602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.661 qpair failed and we were unable to recover it. 00:26:23.661 [2024-04-27 02:45:57.034088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.034579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.034612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 
00:26:23.662 [2024-04-27 02:45:57.035065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.035560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.035589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.036101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.036617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.036647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.036979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.037508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.037537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.038026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.038580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.038609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.039078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.039706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.039735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.040227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.040725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.040755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.041246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.041805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.041834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 
00:26:23.662 [2024-04-27 02:45:57.042307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.042768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.042776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.043275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.043664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.043671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.044165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.044510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.044517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.044899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.045331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.045339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.045871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.046319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.046327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.046842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.047254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.047262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.047735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.048165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.048172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 
00:26:23.662 [2024-04-27 02:45:57.048446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.048900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.048907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.049385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.049752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.049760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.050206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.050519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.050527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.050970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.051315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.051323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.051787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.052241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.052248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.052718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.053165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.053171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.053512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.053976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.053983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 
00:26:23.662 [2024-04-27 02:45:57.054484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.054933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.054940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.055263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.055616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.055624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.055970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.056293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.056301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.056755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.057205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.057212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.057591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.058035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.058043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.058434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.058774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.058781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 00:26:23.662 [2024-04-27 02:45:57.059258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.059736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.662 [2024-04-27 02:45:57.059744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.662 qpair failed and we were unable to recover it. 
00:26:23.663 [2024-04-27 02:45:57.060102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.060610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.060639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.061112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.061657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.061686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.062148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.062582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.062610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.063098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.063725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.063754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.064301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.064675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.064682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.065170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.065685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.065693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.066150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.066622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.066651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 
00:26:23.663 [2024-04-27 02:45:57.067014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.067485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.067514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.067980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.068584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.068614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.069086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.069564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.069594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.070054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.070566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.070595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.071083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.071609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.071638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.072131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.072699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.072728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.073101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.073336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.073349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 
00:26:23.663 [2024-04-27 02:45:57.073723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.074224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.074232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.074705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.075006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.075014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.075640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.075883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.075893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.076369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.076825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.076833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.077296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.077652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.077659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.078151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.078483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.078491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.078986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.079477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.079484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 
00:26:23.663 [2024-04-27 02:45:57.079929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.080376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.080383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.080862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.081297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.081305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.081783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.082235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.082242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.082711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.083158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.083166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.083634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.084084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.084092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.084641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.085139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.085148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 00:26:23.663 [2024-04-27 02:45:57.085732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.086245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.086255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.663 qpair failed and we were unable to recover it. 
00:26:23.663 [2024-04-27 02:45:57.086830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.663 [2024-04-27 02:45:57.087248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.087257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.087526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.088006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.088017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.088609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.089090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.089101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.089637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.090160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.090169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.090544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.090990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.090998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.091215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.091649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.091658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.092148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.092720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.092749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 
00:26:23.664 [2024-04-27 02:45:57.093216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.093765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.093794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.094146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.094817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.094846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.095509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.095982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.095993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.096189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.096552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.096560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.096980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.097568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.097597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.098089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.098578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.098607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.099075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.099503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.099532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 
00:26:23.664 [2024-04-27 02:45:57.099986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.100573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.100602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.101091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.101579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.101609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.102062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.102639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.102667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.103141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.103606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.103635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.104067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.104648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.104677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.105135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.105678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.105707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.106157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.106596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.106625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 
00:26:23.664 [2024-04-27 02:45:57.107094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.107692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.107721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.108171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.108762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.108791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.109282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.109877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.109906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.110127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.110643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.110671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.111140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.111698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.111727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.111950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.112541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.112570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.113060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.113555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.113584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 
00:26:23.664 [2024-04-27 02:45:57.114074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.114640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.114669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.664 qpair failed and we were unable to recover it. 00:26:23.664 [2024-04-27 02:45:57.115138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.664 [2024-04-27 02:45:57.115564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.115593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.116081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.116630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.116659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.117031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.117140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.117150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.117613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.118063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.118072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.118643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.119121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.119131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.119607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.120128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.120138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 
00:26:23.665 [2024-04-27 02:45:57.120698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.121220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.121230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.121732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.122226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.122237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.122835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.123508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.123537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.123993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.124495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.124524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.125018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.125603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.125632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.125998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.126494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.126523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.126896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.127255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.127263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 
00:26:23.665 [2024-04-27 02:45:57.127865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.128473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.128502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.129013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.129604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.129634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.130107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.130688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.130717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.131154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.131540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.131568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.132071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.132637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.132667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.133156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.133734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.133763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.134223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.134686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.134715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 
00:26:23.665 [2024-04-27 02:45:57.135178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.135639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.135668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.136126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.136708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.136737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.137205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.137827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.137857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.665 [2024-04-27 02:45:57.138483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.138986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.665 [2024-04-27 02:45:57.138996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.665 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.139488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.139971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.139980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.140557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.140919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.140930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.141408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.141856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.141864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 
00:26:23.666 [2024-04-27 02:45:57.142341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.142843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.142850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.143310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.143749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.143755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.144201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.144479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.144487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.144963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.145421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.145428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.145876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.146287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.146295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.146780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.147272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.147289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.147651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.147999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.148006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 
00:26:23.666 [2024-04-27 02:45:57.148452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.148740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.148747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.149224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.149564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.149572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.149941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.150358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.150366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.150784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.151099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.151107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.151477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.151951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.151958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.152450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.152666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.152678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.153178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.153518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.153526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 
00:26:23.666 [2024-04-27 02:45:57.154037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.154485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.154493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.154972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.155374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.155382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.155875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.156333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.156340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.156823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.157226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.157234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.157477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.157965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.157975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.158455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.158815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.158822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.159155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.159513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.159520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 
00:26:23.666 [2024-04-27 02:45:57.159865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.160078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.160089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.160527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.160881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.160888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.161374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.161827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.161834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.162204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.162619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.162627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.666 qpair failed and we were unable to recover it. 00:26:23.666 [2024-04-27 02:45:57.162992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.666 [2024-04-27 02:45:57.163487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.163496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.163977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.164567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.164596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.165020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.165560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.165589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 
00:26:23.667 [2024-04-27 02:45:57.166054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.166562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.166594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.167076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.167607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.167636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.168012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.168618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.168647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.169125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.169558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.169586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.169953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.170517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.170546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.171022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.171259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.171272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.171759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.172217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.172224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 
00:26:23.667 [2024-04-27 02:45:57.172741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.173110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.173121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.173581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.174059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.174069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.174649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.175141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.175151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.175621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.176136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.176150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.176707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.177257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.177268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.177731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.178217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.178227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.178795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.179285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.179296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 
00:26:23.667 [2024-04-27 02:45:57.179810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.180484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.180513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.180986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.181551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.181580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.182047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.182660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.182689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.183195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.183660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.183668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.184084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.184681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.184710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.184956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.185560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.185589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.185948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.186438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.186450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 
00:26:23.667 [2024-04-27 02:45:57.186926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.187386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.187393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.187757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.188236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.188244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.188740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.189050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.189057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.189504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.189989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.190000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.190592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.191078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.191088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.667 qpair failed and we were unable to recover it. 00:26:23.667 [2024-04-27 02:45:57.191717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.667 [2024-04-27 02:45:57.192226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.192236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.192849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.193491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.193520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 
00:26:23.668 [2024-04-27 02:45:57.193894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.194384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.194393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.194888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.195380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.195388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.195880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.196248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.196256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.196730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.197228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.197235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.197895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.198490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.198519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.198743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.199082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.199090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.199444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.199906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.199913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 
00:26:23.668 [2024-04-27 02:45:57.200399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.200871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.200878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.201363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.201814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.201822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.202302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.202786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.202793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.203236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.203721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.203730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.204218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.204691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.204699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.205149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.205519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.205549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.206039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.206647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.206675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 
00:26:23.668 [2024-04-27 02:45:57.207165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.207643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.207651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.207999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.208563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.208592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.209091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.209681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.209710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.210163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.210352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.210360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.210755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.211208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.211216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.211566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.212015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.212022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.212488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.212961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.212968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 
00:26:23.668 [2024-04-27 02:45:57.213235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.213735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.213743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.214200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.214640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.214648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.215099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.215708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.215737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.216224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.216789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.216818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.217262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.217840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.217869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.218492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.219004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.219014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.668 qpair failed and we were unable to recover it. 00:26:23.668 [2024-04-27 02:45:57.219562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.668 [2024-04-27 02:45:57.220037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.220047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 
00:26:23.669 [2024-04-27 02:45:57.220700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.221204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.221213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.221649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.222134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.222144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.222725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.223177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.223187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.223753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.224104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.224114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.224623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.225145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.225154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.225740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.226071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.226078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.226539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.227038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.227048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 
00:26:23.669 [2024-04-27 02:45:57.227613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.227960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.227971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.228609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.229092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.229101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.229662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.230151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.230161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.230551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.231007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.231015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.231582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.232100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.232110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.232588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.233061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.233071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.233638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.234161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.234171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 
00:26:23.669 [2024-04-27 02:45:57.234549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.235039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.235047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.235595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.236050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.236060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.236575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.237103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.237112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.237701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.238142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.238152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.238527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.239028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.239040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.239628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.240120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.240129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 00:26:23.669 [2024-04-27 02:45:57.240541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.241064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.241075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.669 qpair failed and we were unable to recover it. 
00:26:23.669 [2024-04-27 02:45:57.241573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.242051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.669 [2024-04-27 02:45:57.242061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.242622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.243139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.243149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.243722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.244176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.244186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.244749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.245231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.245241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.245545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.246060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.246071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.246596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.247122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.247131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.247612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.248138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.248148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 
00:26:23.670 [2024-04-27 02:45:57.248816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.249210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.249219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.249799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.250493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.250522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.250873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.251130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.251137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.251591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.252062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.252072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.252665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.253069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.253079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.253558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.254065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.254075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.254354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.254842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.254851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 
00:26:23.670 [2024-04-27 02:45:57.255317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.255836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.255844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.256097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.256542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.256550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.256996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.257547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.257576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.257931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.258289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.258298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.258757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.259212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.259220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.259760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.260166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.260175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.260666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.261122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.261130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 
00:26:23.670 [2024-04-27 02:45:57.261689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.262193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.262203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.262587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.263065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.263075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.263669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.264160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.264170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.264672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.264904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.264916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.265256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.265615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.265622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.266074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.266625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.266654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.267133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.267697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.267727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 
00:26:23.670 [2024-04-27 02:45:57.268089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.268646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.268675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.670 qpair failed and we were unable to recover it. 00:26:23.670 [2024-04-27 02:45:57.269036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.269646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.670 [2024-04-27 02:45:57.269675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.671 qpair failed and we were unable to recover it. 00:26:23.671 [2024-04-27 02:45:57.270179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.671 [2024-04-27 02:45:57.270624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.671 [2024-04-27 02:45:57.270633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.671 qpair failed and we were unable to recover it. 00:26:23.671 [2024-04-27 02:45:57.271089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.671 [2024-04-27 02:45:57.271654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.671 [2024-04-27 02:45:57.271683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.671 qpair failed and we were unable to recover it. 00:26:23.671 [2024-04-27 02:45:57.272146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.671 [2024-04-27 02:45:57.272723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.671 [2024-04-27 02:45:57.272752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.671 qpair failed and we were unable to recover it. 00:26:23.671 [2024-04-27 02:45:57.273214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.671 [2024-04-27 02:45:57.273773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.671 [2024-04-27 02:45:57.273802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.671 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.274023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.274534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.274543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 
00:26:23.937 [2024-04-27 02:45:57.275041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.275632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.275661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.276166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.276644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.276653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.277127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.277698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.277727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.278225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.278655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.278684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.279171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.279651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.279660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.280192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.280736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.280765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.281304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.281747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.281755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 
00:26:23.937 [2024-04-27 02:45:57.282296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.282757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.282764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.283230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.283744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.283752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.284211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.284709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.284717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.285079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.285505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.285534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.285883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.286354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.286362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.286712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.287183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.287190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.287749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.288053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.288060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 
00:26:23.937 [2024-04-27 02:45:57.288402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.288874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.288882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.289256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.289612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.289620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.290116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.290505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.290534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.291021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.291582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.291611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.292111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.292602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.292631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.293098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.293681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.293710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.294251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.294768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.294797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 
00:26:23.937 [2024-04-27 02:45:57.295282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.295869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.295898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.296122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.296689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.296718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.296940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.297382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.297390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.297858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.298253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.298260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.298599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.299015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.299022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.299598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.300086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.937 [2024-04-27 02:45:57.300095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.937 qpair failed and we were unable to recover it. 00:26:23.937 [2024-04-27 02:45:57.300706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.301269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.301285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 
00:26:23.938 [2024-04-27 02:45:57.301746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.302076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.302084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.302638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.303169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.303179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.303638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.304147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.304157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.304531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.305012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.305019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.305539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.306009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.306019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.306571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.307026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.307037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.307511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.307961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.307970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 
00:26:23.938 [2024-04-27 02:45:57.308354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.308802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.308810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.309283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.309637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.309644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.310126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.310546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.310574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.311066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.311607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.311636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.312127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.312701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.312733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.313189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.313763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.313791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.314017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.314572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.314600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 
00:26:23.938 [2024-04-27 02:45:57.315061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.315604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.315632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.316092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.316611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.316640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.317096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.317475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.317504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.317993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.318503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.318532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.319040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.319553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.319582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.319940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.320398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.320406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.320856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.321320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.321328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 
00:26:23.938 [2024-04-27 02:45:57.321811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.322217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.322227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.322464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.322898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.322905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.323394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.323843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.323851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.324299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.324772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.324780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.325269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.325730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.325738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.326101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.326518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.326546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 00:26:23.938 [2024-04-27 02:45:57.326903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.327235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.938 [2024-04-27 02:45:57.327243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.938 qpair failed and we were unable to recover it. 
00:26:23.939 [2024-04-27 02:45:57.327737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.328079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.328087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.328672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.329194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.329205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.329715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.330177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.330184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.330803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.331505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.331538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.331996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.332556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.332585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.333079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.333668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.333696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.334193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.334644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.334652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 
00:26:23.939 [2024-04-27 02:45:57.335117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.335666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.335695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.336206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.336748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.336777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.337152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.337732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.337761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.338254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.338745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.338774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.339175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.339787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.339816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.340255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.340848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.340876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.341103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.341688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.341720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 
00:26:23.939 [2024-04-27 02:45:57.342209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.342683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.342692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.343154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.343643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.343672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.344211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.344651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.344681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.345027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.345514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.345544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.346033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.346636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.346665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.347117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.347566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.347595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.348058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.348606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.348635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 
00:26:23.939 [2024-04-27 02:45:57.349117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.349695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.349724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.350189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.350759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.350788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.351497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.351966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.351976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.352492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.353013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.353022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.353519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.354037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.354047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.354591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.355069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.355079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.939 [2024-04-27 02:45:57.355613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.356003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.356013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 
00:26:23.939 [2024-04-27 02:45:57.356609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.357090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.939 [2024-04-27 02:45:57.357099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.939 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.357691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.358065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.358075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.358573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.359084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.359094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.359691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.360185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.360195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.360865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.361543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.361572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.362058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.362620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.362649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.363152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.363718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.363747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 
00:26:23.940 [2024-04-27 02:45:57.364196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.364861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.364890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.365238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.365805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.365834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.366462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.366965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.366975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.367565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.367788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.367798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.368285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.368802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.368809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.369286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.369835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.369864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.370487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.371012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.371022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 
00:26:23.940 [2024-04-27 02:45:57.371609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.372128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.372138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.372738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.373038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.373047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.373606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.374073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.374084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.374679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.375165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.375175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.375761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.376148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.376157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.376664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.377115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.377123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.377576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.378050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.378060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 
00:26:23.940 [2024-04-27 02:45:57.378632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.379138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.379147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.379633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.380194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.380204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.380616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.381120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.381130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.381718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.382231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.382241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.382552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.382921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.382932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.383464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.383856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.383864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.384339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.384777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.384784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 
00:26:23.940 [2024-04-27 02:45:57.385245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.385695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.385702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.940 qpair failed and we were unable to recover it. 00:26:23.940 [2024-04-27 02:45:57.386172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.386595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.940 [2024-04-27 02:45:57.386603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.387041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.387595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.387624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.387860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.388384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.388392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.388605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.388948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.388955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.389431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.389921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.389928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.390151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.390692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.390701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 
00:26:23.941 [2024-04-27 02:45:57.391178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.391645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.391652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.392146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.392768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.392798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.393247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.393797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.393825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.394466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.394910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.394921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.395305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.395784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.395791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.396181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.396659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.396666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.397006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.397470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.397478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 
00:26:23.941 [2024-04-27 02:45:57.397820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.398159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.398166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.398542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.398996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.399004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.399457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.399924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.399932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.400016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.400505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.400515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.400979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.401436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.401444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.401809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.402260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.402267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.402731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.403236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.403244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 
00:26:23.941 [2024-04-27 02:45:57.403628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.403815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.403826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.404294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.404813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.404820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.405280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.405660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.405668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.406112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.406648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.406676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.941 qpair failed and we were unable to recover it. 00:26:23.941 [2024-04-27 02:45:57.407128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.941 [2024-04-27 02:45:57.407598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.407627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.408082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.408594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.408624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.409108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.409555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.409584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 
00:26:23.942 [2024-04-27 02:45:57.410044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.410615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.410644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.411024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.411532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.411561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.412048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.412517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.412547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.412963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.413558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.413587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.414043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.414593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.414622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.415070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.415548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.415576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.415946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.416397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.416405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 
00:26:23.942 [2024-04-27 02:45:57.416865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.417189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.417196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.417666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.418133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.418140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.418698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.419215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.419225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.419802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.420292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.420303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.420696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.421190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.421197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.421783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.422147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.422158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.422637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.423136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.423144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 
00:26:23.942 [2024-04-27 02:45:57.423733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.424206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.424215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.424791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.425193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.425203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.425783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.426431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.426461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.426925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.427548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.427577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.427952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.428449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.428457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.428948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.429515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.429544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.430034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.430597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.430626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 
00:26:23.942 [2024-04-27 02:45:57.431118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.431671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.431700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.432186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.432621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.432629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.432983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.433486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.433515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.434002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.434494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.434531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.942 [2024-04-27 02:45:57.435023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.435618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.942 [2024-04-27 02:45:57.435647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.942 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.436000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.436560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.436589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.437030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.437567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.437596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 
00:26:23.943 [2024-04-27 02:45:57.437951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.438434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.438443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.438937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.439354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.439362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.439845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.440202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.440209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.440578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.441023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.441031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.441479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.441981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.441989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.442567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.443066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.443075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.443644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.444162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.444172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 
00:26:23.943 [2024-04-27 02:45:57.444655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.445157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.445165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.445710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.446156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.446164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.446535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.446989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.446997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.447572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.447957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.447967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.448442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.448893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.448900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.449387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.449869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.449877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.450356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.450823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.450831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 
00:26:23.943 [2024-04-27 02:45:57.451323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.451816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.451824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.452295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.452733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.452740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.452953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.453363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.453372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.453850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.454305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.454312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.454808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.455307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.455315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.455759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.456252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.456260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.456722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.457216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.457224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 
00:26:23.943 [2024-04-27 02:45:57.457659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.458111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.458118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.458518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.459000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.459014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.459491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.459973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.459982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.460554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.461039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.461049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.461637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.461891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.461906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.943 [2024-04-27 02:45:57.462419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.462912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.943 [2024-04-27 02:45:57.462919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.943 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.463406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.463898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.463906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 
00:26:23.944 [2024-04-27 02:45:57.464386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.464859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.464867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.465353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.465536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.465547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.465971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.466454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.466462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.466902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.467403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.467410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.467790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.468242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.468252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.468746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.469242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.469250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.469706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.470194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.470201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 
00:26:23.944 [2024-04-27 02:45:57.470433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.470912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.470920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.471363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.471830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.471837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.472317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.472676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.472683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.473009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.473357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.473364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.473848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.474292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.474300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.474787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.475180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.475187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.475661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.476103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.476111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 
00:26:23.944 [2024-04-27 02:45:57.476445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.476902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.476913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.477402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.477901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.477908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.478294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.478793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.478800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.479243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.479639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.479646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.480084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.480625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.480654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.481142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.481715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.481745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.482233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.482755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.482785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 
00:26:23.944 [2024-04-27 02:45:57.483042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.483429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.483439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.483928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.484522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.484551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.485035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.485625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.485655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.486145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.486705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.486738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.487235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.487796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.487825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.488481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.489002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.489012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.944 qpair failed and we were unable to recover it. 00:26:23.944 [2024-04-27 02:45:57.489607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.490086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.944 [2024-04-27 02:45:57.490095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 
00:26:23.945 [2024-04-27 02:45:57.490661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.491138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.491148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.491705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.492178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.492188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.492733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.493251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.493262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.493839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.494467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.494496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.494686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.495175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.495183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.495540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.495991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.495999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.496450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.496941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.496949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 
00:26:23.945 [2024-04-27 02:45:57.497439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.497846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.497853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.498180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.498493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.498501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.498989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.499436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.499444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.499928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.500375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.500383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.500882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.501373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.501380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.501870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.502360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.502367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.502852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.503299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.503307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 
00:26:23.945 [2024-04-27 02:45:57.503758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.504248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.504256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.504733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.505218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.505225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.505674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.506162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.506169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.506661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.507160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.507169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.507613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.508029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.508036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.508615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.509088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.509098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.509685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.510169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.510180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 
00:26:23.945 [2024-04-27 02:45:57.510501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.510872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.510880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.511371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.511863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.511870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.512351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.512843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.512850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.513305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.513758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.513765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.514247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.514703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.514711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.514935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.515392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.515400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 00:26:23.945 [2024-04-27 02:45:57.515822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.516309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.516317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.945 qpair failed and we were unable to recover it. 
00:26:23.945 [2024-04-27 02:45:57.516801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.517209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.945 [2024-04-27 02:45:57.517216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.517740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.518186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.518193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.518549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.519040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.519049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.519534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.520020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.520028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.520600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.521119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.521129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.521705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.522191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.522201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.522757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.523139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.523150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 
00:26:23.946 [2024-04-27 02:45:57.523706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.524226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.524237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.524792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.525481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.525510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.525882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.526486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.526515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.526882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.527383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.527391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.527884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.528375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.528383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.528871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.529251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.529259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.529742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.530054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.530061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 
00:26:23.946 [2024-04-27 02:45:57.530576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.531075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.531085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.531672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.532215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.532225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.532770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.533299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.533318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.533799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.534293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.534301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.534794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.535291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.535299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.535792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.536283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.536291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.536618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.537108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.537115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 
00:26:23.946 [2024-04-27 02:45:57.537646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.538134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.538143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.538705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.539177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.539188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.539751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.540273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.540289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.540843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.541469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.541499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.946 qpair failed and we were unable to recover it. 00:26:23.946 [2024-04-27 02:45:57.541986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.946 [2024-04-27 02:45:57.542563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.542591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.947 qpair failed and we were unable to recover it. 00:26:23.947 [2024-04-27 02:45:57.543064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.543570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.543600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.947 qpair failed and we were unable to recover it. 00:26:23.947 [2024-04-27 02:45:57.544085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.544681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.544710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.947 qpair failed and we were unable to recover it. 
00:26:23.947 [2024-04-27 02:45:57.545160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.545632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.545640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.947 qpair failed and we were unable to recover it. 00:26:23.947 [2024-04-27 02:45:57.546134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.546543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.546573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.947 qpair failed and we were unable to recover it. 00:26:23.947 [2024-04-27 02:45:57.547062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.547610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.547639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.947 qpair failed and we were unable to recover it. 00:26:23.947 [2024-04-27 02:45:57.548123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.548623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.548652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.947 qpair failed and we were unable to recover it. 00:26:23.947 [2024-04-27 02:45:57.549140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.549726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.947 [2024-04-27 02:45:57.549755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:23.947 qpair failed and we were unable to recover it. 00:26:23.947 [2024-04-27 02:45:57.550226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.550771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.550801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.214 qpair failed and we were unable to recover it. 00:26:24.214 [2024-04-27 02:45:57.551255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.551843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.551872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.214 qpair failed and we were unable to recover it. 
00:26:24.214 [2024-04-27 02:45:57.552478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.552952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.552962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.214 qpair failed and we were unable to recover it. 00:26:24.214 [2024-04-27 02:45:57.553547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.554026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.554036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.214 qpair failed and we were unable to recover it. 00:26:24.214 [2024-04-27 02:45:57.554591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.555109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.555119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.214 qpair failed and we were unable to recover it. 00:26:24.214 [2024-04-27 02:45:57.555699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.556173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.556182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.214 qpair failed and we were unable to recover it. 00:26:24.214 [2024-04-27 02:45:57.556541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.557007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.557018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.214 qpair failed and we were unable to recover it. 00:26:24.214 [2024-04-27 02:45:57.557602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.558123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.558132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.214 qpair failed and we were unable to recover it. 00:26:24.214 [2024-04-27 02:45:57.558701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.559205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.559215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.214 qpair failed and we were unable to recover it. 
00:26:24.214 [2024-04-27 02:45:57.559765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.560286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.560296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.214 qpair failed and we were unable to recover it. 00:26:24.214 [2024-04-27 02:45:57.560651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.561111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.214 [2024-04-27 02:45:57.561119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.214 qpair failed and we were unable to recover it. 00:26:24.214 [2024-04-27 02:45:57.561698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.562219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.562229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.562872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.563514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.563543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.564031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.564584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.564613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.565108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.565639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.565668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.566111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.566674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.566703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 
00:26:24.215 [2024-04-27 02:45:57.567173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.567574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.567603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.568089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.568675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.568704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.569186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.569524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.569533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.569990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.570551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.570580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.571053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.571603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.571632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.572118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.572696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.572725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.573243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.573746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.573775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 
00:26:24.215 [2024-04-27 02:45:57.574260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.574851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.574881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.575484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.576008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.576019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.576606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.577130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.577141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.577673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.578193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.578203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.578724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.579246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.579256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.579587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.580073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.580085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.580629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.581099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.581110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 
00:26:24.215 [2024-04-27 02:45:57.581661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.582138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.582148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.582707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.583183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.583194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.583739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.584129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.584140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.584688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.585172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.585182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.585749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.586273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.586290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.586748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.587201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.587210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.587793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.588270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.588287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 
00:26:24.215 [2024-04-27 02:45:57.588809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.589288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.589298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.589839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.590473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.590502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.215 [2024-04-27 02:45:57.590987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.591580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.215 [2024-04-27 02:45:57.591608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.215 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.592081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.592633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.592662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.593113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.593675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.593704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.594191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.594745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.594775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.595257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.595823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.595852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 
00:26:24.216 [2024-04-27 02:45:57.596494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.597014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.597024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.597599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.597948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.597958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.598181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.598646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.598655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.599144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.599721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.599751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.600229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.600736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.600744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.601193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.601738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.601767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.602257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.602794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.602823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 
00:26:24.216 [2024-04-27 02:45:57.603044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.603506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.603515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.603980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.604549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.604578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.605013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.605598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.605627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.606014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.606611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.606641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.607129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.607706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.607735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.608206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.608755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.608784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.609270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.609833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.609862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 
00:26:24.216 [2024-04-27 02:45:57.610210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.610766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.610795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.611287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.611717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.611746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.612208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.612763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.612792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.613258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.613814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.613843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.614463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.614941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.614951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.615176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.615638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.615647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.616184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.616612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.616619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 
00:26:24.216 [2024-04-27 02:45:57.616969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.617445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.617453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.617935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.618428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.618439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.618890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.619296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.619304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.216 qpair failed and we were unable to recover it. 00:26:24.216 [2024-04-27 02:45:57.619648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.216 [2024-04-27 02:45:57.620131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.620138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.620529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.621016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.621023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.621604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.622116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.622126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.622686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.623159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.623169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 
00:26:24.217 [2024-04-27 02:45:57.623664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.624169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.624176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.624652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.625145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.625153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.625711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.626235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.626245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.626580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.627016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.627027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.627593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.628071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.628085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.628665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.629180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.629190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.629544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.630028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.630036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 
00:26:24.217 [2024-04-27 02:45:57.630606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.631082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.631092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.631656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.632145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.632155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.632644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.633094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.633102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.633647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.634163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.634173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.634650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.635145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.635153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.635734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.636238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.636247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.636840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.637466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.637495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 
00:26:24.217 [2024-04-27 02:45:57.637946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.638541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.638574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.639066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.639658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.639687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.640161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.640625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.640633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.640994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.641509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.641537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.642023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.642633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.642663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.643182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.643609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.643617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.644082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.644620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.644649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 
00:26:24.217 [2024-04-27 02:45:57.644868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.645345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.645354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.645817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.646310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.646318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.646676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.647167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.647175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.647658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.648150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.648162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.217 qpair failed and we were unable to recover it. 00:26:24.217 [2024-04-27 02:45:57.648647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.217 [2024-04-27 02:45:57.649139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.649147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.649603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.650049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.650057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.650635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.650981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.650991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 
00:26:24.218 [2024-04-27 02:45:57.651535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.652055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.652065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.652663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.653050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.653060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.653644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.654170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.654180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.654648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.655147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.655155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.655701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.656061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.656071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.656620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.657094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.657104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.657648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.658036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.658046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 
00:26:24.218 [2024-04-27 02:45:57.658568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.658918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.658928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.659493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.659979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.659988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.660455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.660943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.660951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.661526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.661889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.661900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.662361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.662569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.662580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.663054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.663546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.663554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.664005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.664452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.664460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 
00:26:24.218 [2024-04-27 02:45:57.664905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.665396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.665403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.665890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.666335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.666343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.666806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.667304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.667312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.667680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.668129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.668136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.668622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.669112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.669120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.669693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.670181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.670191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.670676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.671168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.671175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 
00:26:24.218 [2024-04-27 02:45:57.671630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.672082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.672090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.672660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.673185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.673195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.673759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.674284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.674294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.674643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.675133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.675141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.218 [2024-04-27 02:45:57.675585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.676051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.218 [2024-04-27 02:45:57.676061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.218 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.676642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.677125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.677135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.677719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.678102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.678112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 
00:26:24.219 [2024-04-27 02:45:57.678583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.679100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.679110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.679700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.680176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.680185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.680709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.681094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.681104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.681653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.682177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.682188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.682649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.683146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.683153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.683704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.683889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.683902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.684374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.684829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.684836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 
00:26:24.219 [2024-04-27 02:45:57.685285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.685755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.685762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.686231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.686715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.686723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.687213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.687584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.687591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.688075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.688679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.688709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.689195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.689632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.689640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.690160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.690702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.690730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.691191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.691729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.691758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 
00:26:24.219 [2024-04-27 02:45:57.692244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.692826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.692855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.693467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.693988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.693997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.694561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.694944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.694954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.695477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.695996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.696005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.696488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.696939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.696946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.697489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.697972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.697982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.219 qpair failed and we were unable to recover it. 00:26:24.219 [2024-04-27 02:45:57.698536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.219 [2024-04-27 02:45:57.699022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.699032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.220 qpair failed and we were unable to recover it. 
00:26:24.220 [2024-04-27 02:45:57.699618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.700059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.700069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.220 qpair failed and we were unable to recover it. 00:26:24.220 [2024-04-27 02:45:57.700609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.701133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.701143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.220 qpair failed and we were unable to recover it. 00:26:24.220 [2024-04-27 02:45:57.701730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.702213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.702223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.220 qpair failed and we were unable to recover it. 00:26:24.220 [2024-04-27 02:45:57.702775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.703025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.703039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.220 qpair failed and we were unable to recover it. 00:26:24.220 [2024-04-27 02:45:57.703547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.703785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.703797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.220 qpair failed and we were unable to recover it. 00:26:24.220 [2024-04-27 02:45:57.704136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.704596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.704603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.220 qpair failed and we were unable to recover it. 00:26:24.220 [2024-04-27 02:45:57.704975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.705554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.220 [2024-04-27 02:45:57.705583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.220 qpair failed and we were unable to recover it. 
00:26:24.220 [2024-04-27 02:45:57.705988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.220 [2024-04-27 02:45:57.706584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.220 [2024-04-27 02:45:57.706613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:24.220 qpair failed and we were unable to recover it.
[... the same three-message failure pattern (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, logged twice per attempt; nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt captured between 02:45:57.705 and 02:45:57.853; only the per-attempt timestamps differ ...]
00:26:24.494 [2024-04-27 02:45:57.853194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.494 [2024-04-27 02:45:57.853706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.494 [2024-04-27 02:45:57.853735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:24.494 qpair failed and we were unable to recover it.
00:26:24.494 [2024-04-27 02:45:57.853957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.854423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.854432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.854883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.855363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.855372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.855786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.856288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.856296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.856747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.857238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.857245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.857725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.858130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.858138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.858558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.859040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.859050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.859499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.860015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.860025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 
00:26:24.494 [2024-04-27 02:45:57.860610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.861084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.861094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.861649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.862173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.862183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.862644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.863144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.863152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.863700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.864221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.864231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.864823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.865464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.865493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.865969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.866468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.866497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.866986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.867578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.867607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 
00:26:24.494 [2024-04-27 02:45:57.868079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.868532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.868561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.869049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.869642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.869672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.870156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.870565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.870594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.870947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.871551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.871580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.872050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.872647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.872677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.873165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.873659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.873667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.874026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.874595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.874624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 
00:26:24.494 [2024-04-27 02:45:57.875003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.875548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.875577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.876047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.876628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.876657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.877085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.877686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.877714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.494 qpair failed and we were unable to recover it. 00:26:24.494 [2024-04-27 02:45:57.878205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.878751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.494 [2024-04-27 02:45:57.878779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.879238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.879793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.879821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.880301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.880796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.880804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.881295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.881796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.881804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 
00:26:24.495 [2024-04-27 02:45:57.882288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.882739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.882746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.883236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.883687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.883694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.884229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.884734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.884763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.885251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.885773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.885801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.886290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.886819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.886848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.887489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.888011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.888020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.888582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.889052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.889063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 
00:26:24.495 [2024-04-27 02:45:57.889651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.890134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.890145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.890585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.891129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.891139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.891727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.892250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.892259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.892815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.893299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.893318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.893803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.894258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.894265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.894758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.895250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.895258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.895830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.896481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.896510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 
00:26:24.495 [2024-04-27 02:45:57.896973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.897518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.897547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.898028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.898572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.898601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.899053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.899470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.899499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.899986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.900458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.900487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.900971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.901214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.901226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.901739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.902229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.902237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.902670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.903036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.903045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 
00:26:24.495 [2024-04-27 02:45:57.903265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.903721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.903731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.904191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.904732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.904761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.905246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.905811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.905840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.906452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.906970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.906980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.495 qpair failed and we were unable to recover it. 00:26:24.495 [2024-04-27 02:45:57.907560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.908032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.495 [2024-04-27 02:45:57.908042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.908598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.908999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.909009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.909257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.909719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.909728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 
00:26:24.496 [2024-04-27 02:45:57.910215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.910766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.910795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.911287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.911845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.911873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.912476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.913004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.913014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.913581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.914055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.914065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.914648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.915129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.915139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.915713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.916188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.916198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.916554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.917024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.917035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 
00:26:24.496 [2024-04-27 02:45:57.917648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.918120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.918129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.918711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.919230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.919240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.919791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.920263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.920273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.920816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.921286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.921298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.921649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.922130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.922141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.922697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.923215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.923228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.923767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.924136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.924146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 
00:26:24.496 [2024-04-27 02:45:57.924699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.924951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.924965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.925560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.925816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.925830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.926343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.926841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.926848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.927300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.927793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.927800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.928261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.928664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.928672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.929162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.929653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.929661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.930142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.930713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.930742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 
00:26:24.496 [2024-04-27 02:45:57.931230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.931785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.931814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.932282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.932721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.932753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.933238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.933787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.933816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.934290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.934821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.934849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.935485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.935958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.935968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.496 qpair failed and we were unable to recover it. 00:26:24.496 [2024-04-27 02:45:57.936539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.496 [2024-04-27 02:45:57.937016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.937026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.937600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.937961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.937972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 
00:26:24.497 [2024-04-27 02:45:57.938559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.939037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.939048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.939622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.940140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.940150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.940708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.941165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.941175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.941668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.942162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.942170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.942535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.943025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.943035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.943519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.944004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.944013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.944573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.945043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.945053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 
00:26:24.497 [2024-04-27 02:45:57.945635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.946120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.946130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.946708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.947092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.947102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.947677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.948038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.948048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.948620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.949141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.949150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.949712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.950191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.950201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.950746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.951268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.951284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.951848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.952474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.952504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 
00:26:24.497 [2024-04-27 02:45:57.952978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.953569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.953601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.954086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.954672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.954701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.955188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.955657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.955686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.956162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.956655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.956663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.957021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.957523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.957552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.957902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.958387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.958395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 00:26:24.497 [2024-04-27 02:45:57.958848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.959300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.959307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.497 qpair failed and we were unable to recover it. 
00:26:24.497 [2024-04-27 02:45:57.959797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.497 [2024-04-27 02:45:57.960293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.960301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.960770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.961262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.961269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.961772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.962263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.962271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.962767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.963259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.963268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.963833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.964357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.964368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.964849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.965448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.965477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.965831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.966331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.966340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 
00:26:24.498 [2024-04-27 02:45:57.966843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.967334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.967342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.967824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.968317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.968325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.968802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.969159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.969166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.969653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.969986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.969995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.970485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.970978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.970987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.971467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.971957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.971965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.972331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.972787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.972794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 
00:26:24.498 [2024-04-27 02:45:57.973282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.973732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.973739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.974229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.974751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.974780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.975301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.975663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.975671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.976016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.976501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.976508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.977008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.977361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.977369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.977846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.978335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.978342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.978831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.979144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.979151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 
00:26:24.498 [2024-04-27 02:45:57.979479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.979885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.979892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.980378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.980851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.980859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.981342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.981729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.981736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.982225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.982713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.982720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.983182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.983632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.983639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.983985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.984334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.984342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.984689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.985144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.985151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 
00:26:24.498 [2024-04-27 02:45:57.985617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.986107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.498 [2024-04-27 02:45:57.986114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.498 qpair failed and we were unable to recover it. 00:26:24.498 [2024-04-27 02:45:57.986669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.987187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.987198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:57.987818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.988492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.988521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:57.989009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.989591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.989620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:57.989842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.990269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.990282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:57.990784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.991236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.991244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:57.991815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.992486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.992515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 
00:26:24.499 [2024-04-27 02:45:57.992739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.993063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.993071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:57.993549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.993996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.994003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:57.994463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.994960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.994967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:57.995515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.995898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.995907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:57.996392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.996888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.996895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:57.997377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.997864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.997871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:57.998242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.998565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.998573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 
00:26:24.499 [2024-04-27 02:45:57.998898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.999363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:57.999371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:57.999810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.000253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.000261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:58.000703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.001191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.001199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:58.001635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.002125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.002132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:58.002613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.003090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.003100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:58.003579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.004098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.004107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:58.004693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.004945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.004959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 
00:26:24.499 [2024-04-27 02:45:58.005426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.005917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.005924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:58.006406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.006860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.006868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:58.007363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.007840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.007848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:58.008366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.008695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.008703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:58.009178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.009650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.009658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:58.010147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.010632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.010640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:58.011123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.011665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.011693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 
00:26:24.499 [2024-04-27 02:45:58.012057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.012632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.012661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:58.013128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.013685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.499 [2024-04-27 02:45:58.013713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.499 qpair failed and we were unable to recover it. 00:26:24.499 [2024-04-27 02:45:58.014204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.014722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.014752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.015199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.015764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.015793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.016283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.016848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.016877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.017469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.017943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.017953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.018292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.018720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.018727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 
00:26:24.500 [2024-04-27 02:45:58.019206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.019657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.019664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.020130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.020708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.020737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.021094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.021642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.021671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.022156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.022736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.022765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.023211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.023767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.023796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.024290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.024759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.024766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.025232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.025722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.025730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 
00:26:24.500 [2024-04-27 02:45:58.026079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.026660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.026689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.027181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.027648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.027656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.028147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.028714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.028743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.029215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.029767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.029797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.030304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.030791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.030799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.031289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.031610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.031619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.032050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.032424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.032432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 
00:26:24.500 [2024-04-27 02:45:58.032899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.033262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.033270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.033745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.034241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.034248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.034794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.035281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.035292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.035765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.036123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.036130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.036692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.037210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.037220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.037806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.038289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.038300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.038850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.039485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.039514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 
00:26:24.500 [2024-04-27 02:45:58.040003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.040551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.040580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.041053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.041640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.041669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.500 [2024-04-27 02:45:58.042063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.042605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.500 [2024-04-27 02:45:58.042634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.500 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.043083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.043675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.043704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.044035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.044617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.044646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.045120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.045701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.045730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.046215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.046731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.046760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 
00:26:24.501 [2024-04-27 02:45:58.047251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.047837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.047866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.048467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.048985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.048995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.049224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.049589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.049599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.050122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.050700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.050729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.051217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.051756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.051785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.052269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.052697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.052726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.053193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.053666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.053674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 
00:26:24.501 [2024-04-27 02:45:58.054170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.054634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.054642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.055126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.055589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.055617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.056102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.056697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.056725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.057238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.057805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.057834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.058455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.058977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.058987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.059538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.060013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.060023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.060599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.061118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.061128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 
00:26:24.501 [2024-04-27 02:45:58.061722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.062238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.062248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.062811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.063439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.063468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.063956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.064545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.064574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.064799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.065262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.065271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.065742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.066239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.066247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.066799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.067287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.067297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.067751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.068248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.068256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 
00:26:24.501 [2024-04-27 02:45:58.068836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.069484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.069513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.069857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.070194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.070201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.070667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.071113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.071121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.501 qpair failed and we were unable to recover it. 00:26:24.501 [2024-04-27 02:45:58.071694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.072214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.501 [2024-04-27 02:45:58.072224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.502 qpair failed and we were unable to recover it. 00:26:24.502 [2024-04-27 02:45:58.072786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.502 [2024-04-27 02:45:58.073173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.502 [2024-04-27 02:45:58.073182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.502 qpair failed and we were unable to recover it. 00:26:24.502 [2024-04-27 02:45:58.073702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.502 [2024-04-27 02:45:58.074147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.502 [2024-04-27 02:45:58.074154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.502 qpair failed and we were unable to recover it. 00:26:24.502 [2024-04-27 02:45:58.074604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.502 [2024-04-27 02:45:58.074858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.502 [2024-04-27 02:45:58.074872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.502 qpair failed and we were unable to recover it. 
00:26:24.502 [2024-04-27 02:45:58.075380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.502 [2024-04-27 02:45:58.075838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.502 [2024-04-27 02:45:58.075845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:24.502 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats continuously from 02:45:58.075 through 02:45:58.224, console timestamps 00:26:24.502 through 00:26:24.774 ...]
00:26:24.774 [2024-04-27 02:45:58.224772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.225247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.225255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.774 qpair failed and we were unable to recover it. 00:26:24.774 [2024-04-27 02:45:58.225674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.225984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.225991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.774 qpair failed and we were unable to recover it. 00:26:24.774 [2024-04-27 02:45:58.226444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.226934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.226941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.774 qpair failed and we were unable to recover it. 00:26:24.774 [2024-04-27 02:45:58.227422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.227791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.227798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.774 qpair failed and we were unable to recover it. 00:26:24.774 [2024-04-27 02:45:58.228259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.228716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.228724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.774 qpair failed and we were unable to recover it. 00:26:24.774 [2024-04-27 02:45:58.229206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.229418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.229430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.774 qpair failed and we were unable to recover it. 00:26:24.774 [2024-04-27 02:45:58.229906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.230396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.230404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.774 qpair failed and we were unable to recover it. 
00:26:24.774 [2024-04-27 02:45:58.230758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.774 [2024-04-27 02:45:58.231220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.231227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.231680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.232168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.232175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.232646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.233146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.233153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.233720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.234204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.234214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.234769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.235238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.235249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.235583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.236088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.236103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.236689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.237171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.237180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 
00:26:24.775 [2024-04-27 02:45:58.237645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.237979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.237988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.238562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.239051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.239061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.239635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.240144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.240154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.240728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.241113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.241123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.241654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.242178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.242188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.242739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.243212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.243222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.243781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.244482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.244511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 
00:26:24.775 [2024-04-27 02:45:58.245000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.245600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.245628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.246119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.246637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.246669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.247121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.247709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.247738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.248206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.248749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.248779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.249265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.249846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.249874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.250475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.250822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.250832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.251282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.251738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.251745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 
00:26:24.775 [2024-04-27 02:45:58.252203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.252743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.252772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.253265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.253852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.253881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.254462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.254866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.254876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.255493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.256014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.256023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.256584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.257046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.257059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.257532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.257896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.257906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.258380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.258853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.258861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 
00:26:24.775 [2024-04-27 02:45:58.259345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.259835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.775 [2024-04-27 02:45:58.259842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.775 qpair failed and we were unable to recover it. 00:26:24.775 [2024-04-27 02:45:58.260310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.260762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.260769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.261251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.261612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.261619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.262104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.262683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.262713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.263200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.263535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.263544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.263904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.264406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.264414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.264867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.265355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.265363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 
00:26:24.776 [2024-04-27 02:45:58.265792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.266020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.266034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.266476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.266975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.266982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.267332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.267664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.267671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.268121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.268606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.268614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.268967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.269466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.269473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.269956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.270524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.270552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.270907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.271386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.271394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 
00:26:24.776 [2024-04-27 02:45:58.271869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.272322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.272330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.272800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.273290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.273298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.273730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.274173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.274181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.274713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.275112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.275120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.275609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.276057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.276064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.276607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.276825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.276838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.277329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.277826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.277834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 
00:26:24.776 [2024-04-27 02:45:58.278296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.278727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.278735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.279212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.279626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.279633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.279983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.280458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.280466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.280821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.281181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.281188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.281513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.281697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.281708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.282200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.282620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.282627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.283069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.283535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.283543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 
00:26:24.776 [2024-04-27 02:45:58.283890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.284367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.284375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.776 [2024-04-27 02:45:58.284784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.285230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.776 [2024-04-27 02:45:58.285237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.776 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.285717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.286161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.286168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.286645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.287128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.287135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.287554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.288048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.288057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.288642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.289010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.289020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.289579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.290054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.290064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 
00:26:24.777 [2024-04-27 02:45:58.290646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.291119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.291128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.291695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.292154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.292164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.292728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.293180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.293189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.293736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.294211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.294221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.294694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.295150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.295158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.295700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.296216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.296226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.296782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.297454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.297483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 
00:26:24.777 [2024-04-27 02:45:58.298003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.298486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.298515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.298867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.299350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.299358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.299850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.300337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.300344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.300804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.301290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.301298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.301774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.302217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.302225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.302684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.303136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.303144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.303564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.304082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.304092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 
00:26:24.777 [2024-04-27 02:45:58.304659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.305182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.305192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.305658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.306085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.306093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.306622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.307068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.307078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.307642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.308124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.308133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.308689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.309212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.309221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.309664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.310147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.310157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.310615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.311105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.311113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 
00:26:24.777 [2024-04-27 02:45:58.311337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.311756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.311765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.312228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.312679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.312688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.777 qpair failed and we were unable to recover it. 00:26:24.777 [2024-04-27 02:45:58.313141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.313726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.777 [2024-04-27 02:45:58.313755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.314098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.314686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.314715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.315199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.315728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.315757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.316234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.316812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.316841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.317488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.317874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.317884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 
00:26:24.778 [2024-04-27 02:45:58.318347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.318831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.318839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.319286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.319733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.319741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.320030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.320601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.320631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.321119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.321705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.321734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.322224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.322751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.322780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.323245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.323839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.323868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.324472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.324952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.324962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 
00:26:24.778 [2024-04-27 02:45:58.325466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.325943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.325953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.326547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.327027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.327037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.327595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.328114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.328124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.328689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.329171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.329181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.329647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.330135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.330143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.330710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.331182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.331191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.331738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.332218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.332228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 
00:26:24.778 [2024-04-27 02:45:58.332753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.333229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.333239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.333824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.334442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.334470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.334925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.335507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.335536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.336021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.336613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.336642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.337116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.337702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.337731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.338214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.338719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.338748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.339076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.339610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.339639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 
00:26:24.778 [2024-04-27 02:45:58.340121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.340705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.340734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.341088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.341314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.341327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.341781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.342274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.342287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.778 qpair failed and we were unable to recover it. 00:26:24.778 [2024-04-27 02:45:58.342634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.778 [2024-04-27 02:45:58.343125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.343133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.343686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.344175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.344185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.344743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.345264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.345274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.345629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.346120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.346128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 
00:26:24.779 [2024-04-27 02:45:58.346671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.347195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.347205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.347644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.348128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.348138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.348565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.349086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.349095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.349679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.350030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.350039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.350595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.351121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.351132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.351694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.352168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.352178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.352739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.353257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.353268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 
00:26:24.779 [2024-04-27 02:45:58.353600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.353962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.353971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.354456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.354978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.354988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.355540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.356044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.356053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.356702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.356959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.356974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.357432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.357913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.357920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.358402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.358895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.358903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.359347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.359812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.359820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 
00:26:24.779 [2024-04-27 02:45:58.360289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.360762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.360769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.360988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.361453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.361461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.361813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.362292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.362301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.362636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.363127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.363135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.363608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.364105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.364112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.364288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.364783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.364791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 00:26:24.779 [2024-04-27 02:45:58.365298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.365796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.365803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.779 qpair failed and we were unable to recover it. 
00:26:24.779 [2024-04-27 02:45:58.366297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.779 [2024-04-27 02:45:58.366528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.366538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.367011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.367505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.367513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.367885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.368228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.368235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.368443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.368756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.368763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.368996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.369470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.369477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.369937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.370384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.370397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.370883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.371375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.371383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 
00:26:24.780 [2024-04-27 02:45:58.371875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.372371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.372379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.372832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.373302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.373309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.373774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.374219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.374226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.374682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.375124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.375131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.375762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.376287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.376297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.376769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.377220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.377228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.377776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.378257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.378267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 
00:26:24.780 [2024-04-27 02:45:58.378823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.379441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.379471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.379924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.380446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.380475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.380827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.381315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.381327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.381641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.382135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.382142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:24.780 [2024-04-27 02:45:58.382628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.383118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.780 [2024-04-27 02:45:58.383126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:24.780 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.383684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.384207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.384216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.384773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.385300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.385318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 
00:26:25.049 [2024-04-27 02:45:58.385677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.386172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.386179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.386649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.387149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.387157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.387700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.388172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.388182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.388651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.388888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.388901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.389374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.389799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.389806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.390271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.390754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.390765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.391149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.391705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.391734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 
00:26:25.049 [2024-04-27 02:45:58.392225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.392821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.392850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.393492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.393970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.393980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.394560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.395037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.395046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.395632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.395883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.395897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.396399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.396847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.396854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.397313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.397801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.049 [2024-04-27 02:45:58.397808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.049 qpair failed and we were unable to recover it. 00:26:25.049 [2024-04-27 02:45:58.398292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.398723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.398731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 
00:26:25.050 [2024-04-27 02:45:58.399170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.399656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.399665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.400111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.400430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.400441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.400782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.401266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.401273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.401754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.402200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.402208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.402645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.403088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.403095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.403617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.404131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.404141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.404559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.405077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.405089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 
00:26:25.050 [2024-04-27 02:45:58.405671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.406046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.406056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.406609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.407128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.407138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.407698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.408223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.408233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.408794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.409491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.409520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.409871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.410465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.410497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.410973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.411476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.411484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.411971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.412487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.412516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 
00:26:25.050 [2024-04-27 02:45:58.412992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.413591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.413619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.413845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.414266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.414275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.414764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.415215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.415223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.415686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.416175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.416183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.416389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.416855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.416863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.417345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.417797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.417805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.418289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.418786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.418793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 
00:26:25.050 [2024-04-27 02:45:58.419237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.419551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.419558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.420030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.420519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.420527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.421014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.421577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.421606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.422092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.422684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.422713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.423166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.423664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.423672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.424202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.424715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.424743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 00:26:25.050 [2024-04-27 02:45:58.425227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.425715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.050 [2024-04-27 02:45:58.425723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.050 qpair failed and we were unable to recover it. 
00:26:25.051 [2024-04-27 02:45:58.426070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.426639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.426668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.427158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.427736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.427766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.428237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.428803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.428831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.429178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.429627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.429656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.430207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.430650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.430658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.431135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.431621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.431650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.432108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.432702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.432731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 
00:26:25.051 [2024-04-27 02:45:58.433193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.433794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.433825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.434287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.434832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.434861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.435489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.436012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.436022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.436588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.436841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.436855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.437094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.437554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.437565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.438139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.438682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.438711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.439195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.439704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.439734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 
00:26:25.051 [2024-04-27 02:45:58.440209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.440575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.440583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.441040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.441533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.441561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.442008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.442586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.442616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.443103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.443681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.443710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.444182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.444649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.444657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.445140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.445699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.445727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.446101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.446548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.446577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 
00:26:25.051 [2024-04-27 02:45:58.447064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.447644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.447673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.448131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.448594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.448623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.449073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.449623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.449651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.450130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.450702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.450731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.451227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.451677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.451707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.452068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.452647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.452675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 00:26:25.051 [2024-04-27 02:45:58.453163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.453503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.453512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.051 qpair failed and we were unable to recover it. 
00:26:25.051 [2024-04-27 02:45:58.453998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.051 [2024-04-27 02:45:58.454537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.454566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.455059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.455592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.455621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.456095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.456554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.456584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.457072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.457557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.457586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.458074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.458551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.458581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.459070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.459693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.459722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.460188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.460729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.460758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 
00:26:25.052 [2024-04-27 02:45:58.461216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.461657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.461665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.462128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.462704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.462732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.463224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.463633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.463661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.464138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.464621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.464650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.465074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.465638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.465667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.466155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.466870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.466899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.467096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.467454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.467482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 
00:26:25.052 [2024-04-27 02:45:58.467824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.468283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.468291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.468740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.469150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.469157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.469594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.470119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.470129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.470704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.471198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.471208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.471688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.472219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.472229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.472679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.473202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.473211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.473692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.474187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.474195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 
00:26:25.052 [2024-04-27 02:45:58.474751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.475264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.475274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.475621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.476113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.476121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.476722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.477234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.477244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.477690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.478077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.478087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.478624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.479153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.479164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.479545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.480042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.480050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.480626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.481113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.481123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 
00:26:25.052 [2024-04-27 02:45:58.481598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.482077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.482086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.052 qpair failed and we were unable to recover it. 00:26:25.052 [2024-04-27 02:45:58.482658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.052 [2024-04-27 02:45:58.483187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.483196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.483752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.484257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.484268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.484845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.485507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.485536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.486024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.486538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.486567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.486897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.487349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.487358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.487692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.488218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.488225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 
00:26:25.053 [2024-04-27 02:45:58.488491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.488962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.488969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.489466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.489950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.489958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.490492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.490827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.490835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.491307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.491791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.491799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.492269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.492736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.492744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.492962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.493388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.493397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.493902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.494400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.494407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 
00:26:25.053 [2024-04-27 02:45:58.494905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.495200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.495208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.495663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.495892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.495903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.496375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.496867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.496875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.497230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.497737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.497745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.498212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.498726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.498734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.499185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.499531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.499539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.499897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.500343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.500350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 
00:26:25.053 [2024-04-27 02:45:58.500691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.500912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.500921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.501378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.501702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.501710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.501922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.502385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.502393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.502885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.503242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.503250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.503601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.504075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.504082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.504556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.505044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.505051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.053 qpair failed and we were unable to recover it. 00:26:25.053 [2024-04-27 02:45:58.505529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.053 [2024-04-27 02:45:58.506048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.506058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 
00:26:25.054 [2024-04-27 02:45:58.506648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.507176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.507187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.507658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.508121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.508129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.508700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.509223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.509233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.509790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.510268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.510283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.510856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.511491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.511520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.511840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.512491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.512520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.513025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.513521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.513550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 
00:26:25.054 [2024-04-27 02:45:58.513918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.514412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.514420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.514885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.515371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.515379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.515851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.516341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.516349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.516865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.517269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.517281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.517751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.518198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.518205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.518630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.519124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.519131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.519700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.520226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.520236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 
00:26:25.054 [2024-04-27 02:45:58.520792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.521283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.521293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.521867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.522513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.522543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.523034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.523510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.523540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.523892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.524496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.524525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.525014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.525531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.525560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.526015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.526598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.526627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.527093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.527670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.527698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 
00:26:25.054 [2024-04-27 02:45:58.528147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.528765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.528793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.529266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.529613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.529640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.530116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.530355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.530380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.530820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.531503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.531532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.054 qpair failed and we were unable to recover it. 00:26:25.054 [2024-04-27 02:45:58.532032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.054 [2024-04-27 02:45:58.532522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.532551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.533038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.533631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.533660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.534160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.534761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.534790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 
00:26:25.055 [2024-04-27 02:45:58.535236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.535840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.535869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.536230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.536766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.536795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.537164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.537717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.537750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.538215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.538697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.538705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.539197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.539733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.539761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.540213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.540443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.540457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.540930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.541387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.541395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 
00:26:25.055 [2024-04-27 02:45:58.541849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.542260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.542268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.542536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.543006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.543014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.543576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.544066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.544076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.544659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.545155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.545165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.545406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.545786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.545794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.546206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.546641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.546653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.547137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.547705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.547734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 
00:26:25.055 [2024-04-27 02:45:58.548082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.548682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.548711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.549189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.549534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.549543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.550010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.550568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.550598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.551054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.551621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.551650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.552113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.552716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.552744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.553238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.553569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.553598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.554090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.554686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.554714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 
00:26:25.055 [2024-04-27 02:45:58.555079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.555633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.555662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.556202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.556771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.556803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.557260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.557810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.557839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.558484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.558969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.558979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.559596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.560095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.560105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.055 qpair failed and we were unable to recover it. 00:26:25.055 [2024-04-27 02:45:58.560658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.055 [2024-04-27 02:45:58.561025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.561035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.561493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.561976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.561986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 
00:26:25.056 [2024-04-27 02:45:58.562545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.563041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.563051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.563551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.564022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.564030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.564392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.564848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.564855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.565225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.565701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.565709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.566177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.566627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.566638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 281209 Killed "${NVMF_APP[@]}" "$@" 00:26:25.056 [2024-04-27 02:45:58.567116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.567682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.567712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 
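The shell notice in the entry above (target_disconnect.sh line 44: pid 281209 Killed "${NVMF_APP[@]}") accounts for the flood of connect() failures around it: with the nvmf target process gone, every TCP connect from the initiator to 10.0.0.2:4420 is refused, and errno = 111 is Linux's ECONNREFUSED. The lines below are a hedged aside, not part of the SPDK test suite: the header path and the /dev/tcp probe are illustrative only, and just the address, port and errno value are taken from the log.

# On a Linux host with kernel headers installed, errno 111 resolves to ECONNREFUSED:
grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
#   #define ECONNREFUSED    111     /* Connection refused */

# With no listener behind 10.0.0.2:4420 (address/port from the log above), a plain
# TCP probe is refused the same way the SPDK initiator's connect() is:
if ! timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "no listener on 10.0.0.2:4420 (connection refused or timed out)"
fi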
00:26:25.056 02:45:58 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:26:25.056 02:45:58 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:25.056 [2024-04-27 02:45:58.568187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 02:45:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:25.056 02:45:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:25.056 [2024-04-27 02:45:58.568653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.568661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 02:45:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.056 [2024-04-27 02:45:58.569152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.569706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.569735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.570091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.570672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.570701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.571074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.571557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.571586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.572031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.572506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.572535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.572910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.573483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.573511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 
00:26:25.056 [2024-04-27 02:45:58.573850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.056 [2024-04-27 02:45:58.574145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.056 [2024-04-27 02:45:58.574154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.056 qpair failed and we were unable to recover it.
00:26:25.056 [2024-04-27 02:45:58.574630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.056 [2024-04-27 02:45:58.575097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.056 [2024-04-27 02:45:58.575108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.056 qpair failed and we were unable to recover it.
00:26:25.056 [2024-04-27 02:45:58.575682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.056 02:45:58 -- nvmf/common.sh@470 -- # nvmfpid=282096 [2024-04-27 02:45:58.576174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.056 [2024-04-27 02:45:58.576185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.056 qpair failed and we were unable to recover it.
00:26:25.056 02:45:58 -- nvmf/common.sh@471 -- # waitforlisten 282096
00:26:25.056 02:45:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:25.056 02:45:58 -- common/autotest_common.sh@817 -- # '[' -z 282096 ']'
00:26:25.056 [2024-04-27 02:45:58.576727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.056 02:45:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock [2024-04-27 02:45:58.577026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.056 [2024-04-27 02:45:58.577037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.056 qpair failed and we were unable to recover it.
00:26:25.056 02:45:58 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:25.056 02:45:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:25.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:25.056 [2024-04-27 02:45:58.577503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.056 02:45:58 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:25.056 02:45:58 -- common/autotest_common.sh@10 -- # set +x
00:26:25.056 [2024-04-27 02:45:58.577902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.056 [2024-04-27 02:45:58.577912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.056 qpair failed and we were unable to recover it.
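The waitforlisten 282096 step traced above is the harness blocking until the freshly launched nvmf_tgt (pid 282096, started in the cvl_0_0_ns_spdk namespace with core mask 0xF0) is up and accepting RPCs on /var/tmp/spdk.sock, with max_retries=100 bounding the wait. A rough equivalent of that readiness poll, written as an illustrative sketch rather than SPDK's actual helper and assuming only the Python standard library:

    import socket
    import time

    def wait_for_rpc_socket(path="/var/tmp/spdk.sock", timeout=30.0):
        # Poll until something accepts connections on the UNIX-domain RPC socket.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True       # target is up and listening
            except OSError:
                time.sleep(0.5)   # socket missing or refusing; retry
            finally:
                s.close()
        return False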
00:26:25.056 [2024-04-27 02:45:58.578371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.578756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.578765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.579194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.579655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.579664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.580094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.580562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.580571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.581023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.581541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.581569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.582067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.582632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.582666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.583143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.583719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.583749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 00:26:25.056 [2024-04-27 02:45:58.584215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.584760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.056 [2024-04-27 02:45:58.584789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.056 qpair failed and we were unable to recover it. 
00:26:25.057 [2024-04-27 02:45:58.585244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.585691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.585720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.586220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.586612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.586642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.587111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.587308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.587330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.587910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.588512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.588543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.589044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.589621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.589648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.590141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.590776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.590803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.591236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.591730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.591757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 
00:26:25.057 [2024-04-27 02:45:58.592250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.592800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.592831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.593236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.593875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.593902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.594503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.594908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.594916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.595525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.596046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.596055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.596639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.597158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.597167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.597540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.598047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.598053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.598706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.599245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.599253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 
00:26:25.057 [2024-04-27 02:45:58.599547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.600072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.600081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.600568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.601119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.601128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.601594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.602099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.602108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.602580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.603134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.603143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.603651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.603927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.603937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.604599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.604956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.604966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.605457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.605949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.605956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 
00:26:25.057 [2024-04-27 02:45:58.606567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.607098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.607107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.607688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.608089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.608098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.608683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.609177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.609186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.609715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.610213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.610219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.610819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.611484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.611512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.612035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.612662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.612690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.057 [2024-04-27 02:45:58.613223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.613687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.613715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 
00:26:25.057 [2024-04-27 02:45:58.614222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.614499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.057 [2024-04-27 02:45:58.614506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.057 qpair failed and we were unable to recover it. 00:26:25.058 [2024-04-27 02:45:58.614928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.615185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.615191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.058 qpair failed and we were unable to recover it. 00:26:25.058 [2024-04-27 02:45:58.615682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.616137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.616144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.058 qpair failed and we were unable to recover it. 00:26:25.058 [2024-04-27 02:45:58.616531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.617103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.617112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.058 qpair failed and we were unable to recover it. 00:26:25.058 [2024-04-27 02:45:58.617585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.618066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.618076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.058 qpair failed and we were unable to recover it. 00:26:25.058 [2024-04-27 02:45:58.618660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.619207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.619216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.058 qpair failed and we were unable to recover it. 00:26:25.058 [2024-04-27 02:45:58.619828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.620508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.620536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.058 qpair failed and we were unable to recover it. 
00:26:25.058 [2024-04-27 02:45:58.620908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.621142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.621153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
00:26:25.058 [2024-04-27 02:45:58.621724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.622211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.622220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
00:26:25.058 [2024-04-27 02:45:58.622616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.623069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.623075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
00:26:25.058 [2024-04-27 02:45:58.623609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.624095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.624104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
00:26:25.058 [2024-04-27 02:45:58.624679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.625216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.625224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
00:26:25.058 [2024-04-27 02:45:58.625809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.626503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.626530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
00:26:25.058 [2024-04-27 02:45:58.626984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.627566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.627593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
00:26:25.058 [2024-04-27 02:45:58.628109] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization...
00:26:25.058 [2024-04-27 02:45:58.628140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.628153] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:25.058 [2024-04-27 02:45:58.628606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.628634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
00:26:25.058 [2024-04-27 02:45:58.628972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.629532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.629559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
00:26:25.058 [2024-04-27 02:45:58.630070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.630578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.630606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
00:26:25.058 [2024-04-27 02:45:58.631112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.631310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.631322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
00:26:25.058 [2024-04-27 02:45:58.631771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.631958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.631968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
00:26:25.058 [2024-04-27 02:45:58.632587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.632861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.632870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
00:26:25.058 [2024-04-27 02:45:58.633341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.633852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.058 [2024-04-27 02:45:58.633859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.058 qpair failed and we were unable to recover it.
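The DPDK EAL parameter line above shows the restarted target pinned with core mask 0xF0 (binary 1111 0000, i.e. CPU cores 4-7), matching the -m 0xF0 passed to nvmf_tgt earlier, while the host-side connect retries keep interleaving with its startup output. Decoding such a mask is plain bit arithmetic; a small sketch, not part of the test scripts:

    mask = 0xF0
    cores = [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]
    print(cores)  # [4, 5, 6, 7]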
00:26:25.058 [2024-04-27 02:45:58.634214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.634713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.634720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.058 qpair failed and we were unable to recover it. 00:26:25.058 [2024-04-27 02:45:58.635190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.635648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.635655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.058 qpair failed and we were unable to recover it. 00:26:25.058 [2024-04-27 02:45:58.636162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.636600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.636628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.058 qpair failed and we were unable to recover it. 00:26:25.058 [2024-04-27 02:45:58.637098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.637690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.058 [2024-04-27 02:45:58.637717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.058 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.638229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.638763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.638792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.639305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.639549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.639561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.640055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.640550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.640558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 
00:26:25.059 [2024-04-27 02:45:58.640811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.641286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.641292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.641779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.642115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.642121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.642449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.642823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.642829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.643322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.643648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.643654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.644138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.644578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.644584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.644912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.645409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.645416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.645912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.646410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.646417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 
00:26:25.059 [2024-04-27 02:45:58.646912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.647372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.647379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.647607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.647965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.647971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.648458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.648927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.648933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.649428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.649931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.649937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.650270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.650744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.650751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.651141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.651808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.651836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.652239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.652791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.652820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 
00:26:25.059 [2024-04-27 02:45:58.653484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.653909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.653918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.654506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.655038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.655047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.655640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.656173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.656181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.656455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.656931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.656937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.657442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.657677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.657684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.658255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.658729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.658736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.059 [2024-04-27 02:45:58.659220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.659693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.659700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 
00:26:25.059 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.059 [2024-04-27 02:45:58.660191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.660668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.059 [2024-04-27 02:45:58.660674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.059 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.661043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.661472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.661499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.662012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.662627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.662656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.663233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.663697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.663705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.664207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.664756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.664784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.665496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.666026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.666034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.666715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.667255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.667263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 
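The EAL note at the start of this chunk ("No free 2048 kB hugepages reported on node 1") only states that NUMA node 1 had no free 2 MiB hugepages when the target initialized; it is typically a warning rather than a fatal error as long as the allocation can be satisfied elsewhere. Per-node availability can be read from sysfs; a sketch assuming the standard Linux layout, not part of the test scripts:

    from pathlib import Path

    # Free 2 MiB hugepages per NUMA node, read from the standard sysfs locations.
    for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        free = (node / "hugepages/hugepages-2048kB/free_hugepages").read_text().strip()
        print(node.name, free)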
00:26:25.327 [2024-04-27 02:45:58.667865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.668484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.668512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.669015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.669457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.669485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.669760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.669958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.669965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.670349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.670852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.670858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.671346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.671806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.671812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.672254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.672726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.672733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.673096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.673543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.673570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 
00:26:25.327 [2024-04-27 02:45:58.673750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.674123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.674130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.674612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.675122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.675128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.675697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.676228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.676237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.676663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.677191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.677200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.677598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.677982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.677989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.678185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.678677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.678684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 00:26:25.327 [2024-04-27 02:45:58.679167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.679542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.327 [2024-04-27 02:45:58.679549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.327 qpair failed and we were unable to recover it. 
00:26:25.328 [2024-04-27 02:45:58.680039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.680538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.680566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.681070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.681655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.681682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.682085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.682667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.682694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.683029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.683728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.683755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.684261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.684850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.684877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.685225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.685801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.685828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.686461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.686988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.686997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 
00:26:25.328 [2024-04-27 02:45:58.687588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.688142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.688150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.688633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.688997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.689005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.689599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.690094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.690103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.690669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.691200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.691209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.691780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.692500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.692528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.693026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.693661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.693689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.694189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.694649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.694656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 
00:26:25.328 [2024-04-27 02:45:58.695009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.695236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.695247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.695715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.695985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.695991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.696580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.697111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.697119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.697711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.698285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.698294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.698836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.699516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.699544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.700044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.700653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.700681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.701195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.701799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.701827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 
00:26:25.328 [2024-04-27 02:45:58.702461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.702995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.703004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.703523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.703917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.703925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.704304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.704664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.704671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.705144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.705538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.705544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.706035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.706467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.706495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.706725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.707219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.707226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.328 qpair failed and we were unable to recover it. 00:26:25.328 [2024-04-27 02:45:58.707715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.328 [2024-04-27 02:45:58.708209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.708216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 
00:26:25.329 [2024-04-27 02:45:58.708569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.709091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.709097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.709701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.710239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.710251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.710684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.711086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.711095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.711677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.712209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.712218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.712272] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:25.329 [2024-04-27 02:45:58.712442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.712869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.712876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.713320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.713660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.713667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.714170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.714664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.714671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 
00:26:25.329 [2024-04-27 02:45:58.715022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.715518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.715525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.716019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.716104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.716116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.716356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.716826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.716834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.717335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.717838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.717844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.718217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.718722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.718729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.719285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.719589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.719595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.720125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.720623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.720630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 
00:26:25.329 [2024-04-27 02:45:58.721107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.721691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.721719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.722222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.722809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.722836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.723076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.723618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.723646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.724152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.724750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.724778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.725496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.725731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.725739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.726240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.726711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.726718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.727077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.727647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.727675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 
00:26:25.329 [2024-04-27 02:45:58.728171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.728718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.728726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.729239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.729693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.729720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.730189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.730662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.730669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.731158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.731766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.731794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.732292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.732846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.732853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.733224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.733705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.733712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 00:26:25.329 [2024-04-27 02:45:58.734205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.734739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.329 [2024-04-27 02:45:58.734767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.329 qpair failed and we were unable to recover it. 
00:26:25.330 [2024-04-27 02:45:58.735258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.735737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.735765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.736259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.736819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.736847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.737475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.737989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.737998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.738605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.739075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.739084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.739657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.740171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.740180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.740612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.741044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.741050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.741626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.742142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.742151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 
00:26:25.330 [2024-04-27 02:45:58.742758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.743285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.743294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.743885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.744520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.744548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.745042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.745631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.745659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.746032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.746619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.746647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.747139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.747695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.747723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.748216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.748599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.748626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.749118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.749706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.749738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 
00:26:25.330 [2024-04-27 02:45:58.750229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.750779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.750807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.751166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.751552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.751559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.751978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.752573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.752600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.753005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.753253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.753260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.753643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.754122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.754129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.754677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.755194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.755203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.755555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.755791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.755802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 
00:26:25.330 [2024-04-27 02:45:58.756310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.756800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.756807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.757359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.757837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.757843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.758304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.758512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.758521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.758975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.759477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.759484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.759945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.760393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.760400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.760845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.761212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.761219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.330 [2024-04-27 02:45:58.761688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.762143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.762149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 
00:26:25.330 [2024-04-27 02:45:58.762493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.763035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.330 [2024-04-27 02:45:58.763045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.330 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.763631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.764170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.764179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.764703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.765182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.765189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.765652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.766029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.766035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.766612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.767124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.767133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.767685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.768147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.768159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.768638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.769139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.769145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 
00:26:25.331 [2024-04-27 02:45:58.769706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.770218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.770227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.770797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.771465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.771492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.771972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.772307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.772324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.772695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.773218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.773224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.773728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.774219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.774225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.774694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.775153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.775160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.775711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.776086] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.331 [2024-04-27 02:45:58.776112] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.331 [2024-04-27 02:45:58.776121] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.331 [2024-04-27 02:45:58.776128] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.331 [2024-04-27 02:45:58.776135] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
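The app_setup_trace notices above describe how the trace data for this run can be inspected. A minimal sketch following those hints (the instance id and shared-memory path come from the notice itself; the destination path is illustrative and the spdk_trace binary location may differ on the test node):
# snapshot the live trace events of the running nvmf app, instance 0, as the notice suggests
spdk_trace -s nvmf -i 0
# or preserve the shared-memory trace file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved   # hypothetical destination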
00:26:25.331 [2024-04-27 02:45:58.776220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.776229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.776297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:25.331 [2024-04-27 02:45:58.776466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:25.331 [2024-04-27 02:45:58.776626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:25.331 [2024-04-27 02:45:58.776627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:25.331 [2024-04-27 02:45:58.776760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.777173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.777183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.777768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.778482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.778510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.778887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.779394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.779401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.779925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.780359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.780367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.780838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.781351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.781358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.781826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.782299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.782306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 
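The reactor notices above show the SPDK event framework starting one reactor per core on cores 4-7, consistent with the earlier "Total cores available: 4" message. A plausible launch sketch, assuming the target was given an explicit CPU mask; the actual command line is not shown in this part of the log and the binary path is illustrative:
# 0xF0 selects cores 4,5,6 and 7 - one reactor thread is pinned to each selected core
./build/bin/nvmf_tgt -m 0xF0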
00:26:25.331 [2024-04-27 02:45:58.782861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.783288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.783295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.331 qpair failed and we were unable to recover it. 00:26:25.331 [2024-04-27 02:45:58.783404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.331 [2024-04-27 02:45:58.783856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.783863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.784105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.784565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.784572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.784900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.785288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.785296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.785755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.786253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.786259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.786770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.787229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.787235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.787651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.788173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.788182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 
00:26:25.332 [2024-04-27 02:45:58.788608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.788826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.788836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.789326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.789818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.789824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.790176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.790649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.790656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.791019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.791512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.791518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.792003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.792431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.792438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.792899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.793237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.793244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.793700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.794207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.794213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 
00:26:25.332 [2024-04-27 02:45:58.794671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.795165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.795171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.795545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.796014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.796024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.796355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.796798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.796806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.797282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.797564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.797570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.798057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.798653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.798681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.799031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.799579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.799606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 00:26:25.332 [2024-04-27 02:45:58.799985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.800319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.332 [2024-04-27 02:45:58.800326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.332 qpair failed and we were unable to recover it. 
00:26:25.332 [2024-04-27 02:45:58.800657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.332 [2024-04-27 02:45:58.801160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.332 [2024-04-27 02:45:58.801166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.332 qpair failed and we were unable to recover it.
[... identical failure sequences omitted: posix_sock_create connect() failed (errno = 111), followed by nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7eff40000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it.", repeating continuously from 02:45:58.800 through 02:45:58.932 ...]
00:26:25.338 [2024-04-27 02:45:58.932349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.338 [2024-04-27 02:45:58.932816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.338 [2024-04-27 02:45:58.932822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.338 qpair failed and we were unable to recover it.
00:26:25.338 [2024-04-27 02:45:58.933243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.933705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.933711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.338 qpair failed and we were unable to recover it. 00:26:25.338 [2024-04-27 02:45:58.934182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.934651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.934657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.338 qpair failed and we were unable to recover it. 00:26:25.338 [2024-04-27 02:45:58.935110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.935674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.935702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.338 qpair failed and we were unable to recover it. 00:26:25.338 [2024-04-27 02:45:58.936199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.936689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.936697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.338 qpair failed and we were unable to recover it. 00:26:25.338 [2024-04-27 02:45:58.937017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.937282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.937288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.338 qpair failed and we were unable to recover it. 00:26:25.338 [2024-04-27 02:45:58.937856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.938486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.938514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.338 qpair failed and we were unable to recover it. 00:26:25.338 [2024-04-27 02:45:58.938966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.939521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.338 [2024-04-27 02:45:58.939549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.338 qpair failed and we were unable to recover it. 
00:26:25.605 [2024-04-27 02:45:58.940076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.940621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.940649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.605 qpair failed and we were unable to recover it. 00:26:25.605 [2024-04-27 02:45:58.940982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.941215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.941226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.605 qpair failed and we were unable to recover it. 00:26:25.605 [2024-04-27 02:45:58.941755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.942263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.942273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.605 qpair failed and we were unable to recover it. 00:26:25.605 [2024-04-27 02:45:58.942695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.943112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.943119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.605 qpair failed and we were unable to recover it. 00:26:25.605 [2024-04-27 02:45:58.943695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.944094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.944103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.605 qpair failed and we were unable to recover it. 00:26:25.605 [2024-04-27 02:45:58.944587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.944949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.944958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.605 qpair failed and we were unable to recover it. 00:26:25.605 [2024-04-27 02:45:58.945526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.945810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.945820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.605 qpair failed and we were unable to recover it. 
00:26:25.605 [2024-04-27 02:45:58.946320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.946801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.605 [2024-04-27 02:45:58.946808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.605 qpair failed and we were unable to recover it. 00:26:25.605 [2024-04-27 02:45:58.947263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.947740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.947747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.948202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.948394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.948406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.948956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.949408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.949415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.949862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.950312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.950319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.950808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.951269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.951282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.951820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.952167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.952173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 
00:26:25.606 [2024-04-27 02:45:58.952529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.953026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.953032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.953587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.953958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.953967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.954511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.954958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.954964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.955457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.955947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.955955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.956410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.956883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.956889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.957352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.957854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.957861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.958079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.958550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.958558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 
00:26:25.606 [2024-04-27 02:45:58.958888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.959345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.959352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.959603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.960053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.960062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.960511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.960917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.960923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.961370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.961830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.961836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.962294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.962640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.962646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.963149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.963492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.963500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.963836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.964114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.964120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 
00:26:25.606 [2024-04-27 02:45:58.964625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.965085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.965091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.965758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.966015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.966024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.966560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.966811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.966823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.967193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.967310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.967316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.967781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.968148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.968158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.968655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.969118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.969125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.969708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.970197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.970206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 
00:26:25.606 [2024-04-27 02:45:58.970669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.971132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.606 [2024-04-27 02:45:58.971138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.606 qpair failed and we were unable to recover it. 00:26:25.606 [2024-04-27 02:45:58.971704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.972078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.972087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.972728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.973091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.973100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.973530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.974023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.974032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.974587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.974873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.974883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.975376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.975831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.975837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.976061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.976260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.976270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 
00:26:25.607 [2024-04-27 02:45:58.976513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.976993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.976999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.977454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.977915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.977921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.978167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.978647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.978654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.979015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.979500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.979507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.979956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.980406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.980413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.980865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.981053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.981060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.981471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.981965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.981971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 
00:26:25.607 [2024-04-27 02:45:58.982333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.982711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.982717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.983161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.983728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.983734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.984187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.984652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.984679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.985176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.985651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.985658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.986119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.986663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.986690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.987144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.987572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.987600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.988095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.988491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.988519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 
00:26:25.607 [2024-04-27 02:45:58.989025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.989570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.989598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.990092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.990634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.990662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.991151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.991603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.991631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.991854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.992297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.992306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.992680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.993135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.993141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.993588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.994042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.994048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.607 [2024-04-27 02:45:58.994622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.995116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.995125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 
00:26:25.607 [2024-04-27 02:45:58.995707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.996192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.607 [2024-04-27 02:45:58.996201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.607 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:58.996560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:58.997077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:58.997085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:58.997630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:58.998122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:58.998131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:58.998688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:58.999184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:58.999193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:58.999681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:58.999904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:58.999913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.000288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.000659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.000666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.001110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.001603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.001611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 
00:26:25.608 [2024-04-27 02:45:59.002051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.002497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.002524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.003021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.003486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.003513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.004015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.004272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.004292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.004775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.005242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.005248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.005800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.006163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.006172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.006734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.007229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.007239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.007799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.008486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.008513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 
00:26:25.608 [2024-04-27 02:45:59.008646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.009164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.009171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.009655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.010030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.010036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.010264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.010729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.010736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.011147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.011704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.011732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.012233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.012659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.012687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.013182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.013552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.013559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.014062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.014623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.014650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 
00:26:25.608 [2024-04-27 02:45:59.015141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.015697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.015725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.016270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.016673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.016701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.017178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.017754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.017782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.018112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.018553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.018581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.018931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.019542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.019570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.020071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.020592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.020620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 00:26:25.608 [2024-04-27 02:45:59.020995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.021581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.608 [2024-04-27 02:45:59.021608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.608 qpair failed and we were unable to recover it. 
00:26:25.608 [2024-04-27 02:45:59.022101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.608 [2024-04-27 02:45:59.022655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:25.608 [2024-04-27 02:45:59.022683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420
00:26:25.608 qpair failed and we were unable to recover it.
[The same failure sequence repeats continuously in the log from 02:45:59.022 through 02:45:59.152 (elapsed time 00:26:25.608 to 00:26:25.614): two posix_sock_create "connect() failed, errno = 111" errors, then an nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420" error, then "qpair failed and we were unable to recover it." The duplicate entries are elided here; only the timestamps differ between repetitions.]
00:26:25.614 [2024-04-27 02:45:59.152850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.153322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.153329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.614 qpair failed and we were unable to recover it. 00:26:25.614 [2024-04-27 02:45:59.153786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.154235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.154241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.614 qpair failed and we were unable to recover it. 00:26:25.614 [2024-04-27 02:45:59.154733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.155219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.155225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.614 qpair failed and we were unable to recover it. 00:26:25.614 [2024-04-27 02:45:59.155449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.155888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.155894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.614 qpair failed and we were unable to recover it. 00:26:25.614 [2024-04-27 02:45:59.156341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.156757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.156763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.614 qpair failed and we were unable to recover it. 00:26:25.614 [2024-04-27 02:45:59.157222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.157479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.157487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.614 qpair failed and we were unable to recover it. 00:26:25.614 [2024-04-27 02:45:59.157967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.158420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.158427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.614 qpair failed and we were unable to recover it. 
00:26:25.614 [2024-04-27 02:45:59.158875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.159327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.159333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.614 qpair failed and we were unable to recover it. 00:26:25.614 [2024-04-27 02:45:59.159772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.160211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.160217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.614 qpair failed and we were unable to recover it. 00:26:25.614 [2024-04-27 02:45:59.160568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.160920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.160927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.614 qpair failed and we were unable to recover it. 00:26:25.614 [2024-04-27 02:45:59.161273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.161508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.614 [2024-04-27 02:45:59.161523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.614 qpair failed and we were unable to recover it. 00:26:25.614 [2024-04-27 02:45:59.161885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.162338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.162345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.162792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.163259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.163265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.163716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.164165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.164172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 
00:26:25.615 [2024-04-27 02:45:59.164387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.164668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.164675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.165133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.165592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.165599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.166088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.166683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.166712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.167213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.167697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.167704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.168218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.168570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.168597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.169123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.169649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.169676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.170172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.170531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.170538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 
00:26:25.615 [2024-04-27 02:45:59.171044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.171509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.171536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.171895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.172241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.172248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.172733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.173189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.173196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.173669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.174149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.174156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.174736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.175227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.175237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.175566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.176055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.176066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.176648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.177141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.177150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 
00:26:25.615 [2024-04-27 02:45:59.177695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.177985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.177994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.178246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.178807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.178834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.179438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.179987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.179996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.180548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.180833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.180843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.181014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.181363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.181370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.181721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.182194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.182200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.182685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.182869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.182881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 
00:26:25.615 [2024-04-27 02:45:59.183362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.183750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.183756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.184011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.184465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.184472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.184924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.185375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.185381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.185848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.186299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.186305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.615 [2024-04-27 02:45:59.186730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.187194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.615 [2024-04-27 02:45:59.187200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.615 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.187661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.188116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.188122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.188573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.188908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.188915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 
00:26:25.616 [2024-04-27 02:45:59.189368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.189585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.189592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.190053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.190515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.190522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.191008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.191474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.191480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.191927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.192155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.192168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.192653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.192870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.192879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.193309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.193569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.193576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.194079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.194536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.194542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 
00:26:25.616 [2024-04-27 02:45:59.194988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.195442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.195449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.195896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.196331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.196338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.196688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.197184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.197190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.197533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.197948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.197954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.198178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.198598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.198605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.199079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.199314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.199324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.199810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.200267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.200273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 
00:26:25.616 [2024-04-27 02:45:59.200609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.201156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.201166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.201704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.201821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.201827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.202290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.202741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.202747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.203121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.203564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.203570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.203892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.204228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.204235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.204696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.205250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.205256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.205763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.206216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.206223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 
00:26:25.616 [2024-04-27 02:45:59.206783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.207038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.207051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.207273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.207768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.207775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.208228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.208792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.208820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.209486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.209772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.209781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.210285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.210623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.210629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.616 [2024-04-27 02:45:59.211030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.211591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.616 [2024-04-27 02:45:59.211619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.616 qpair failed and we were unable to recover it. 00:26:25.617 [2024-04-27 02:45:59.211950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.212405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.212415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.617 qpair failed and we were unable to recover it. 
00:26:25.617 [2024-04-27 02:45:59.212847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.213380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.213387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.617 qpair failed and we were unable to recover it. 00:26:25.617 [2024-04-27 02:45:59.213842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.214298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.214305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.617 qpair failed and we were unable to recover it. 00:26:25.617 [2024-04-27 02:45:59.214820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.215043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.215055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.617 qpair failed and we were unable to recover it. 00:26:25.617 [2024-04-27 02:45:59.215519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.215771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.215777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.617 qpair failed and we were unable to recover it. 00:26:25.617 [2024-04-27 02:45:59.216283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.216751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.216756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.617 qpair failed and we were unable to recover it. 00:26:25.617 [2024-04-27 02:45:59.217209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.217670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.617 [2024-04-27 02:45:59.217677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.617 qpair failed and we were unable to recover it. 00:26:25.617 [2024-04-27 02:45:59.218164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.884 [2024-04-27 02:45:59.218793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.884 [2024-04-27 02:45:59.218822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.884 qpair failed and we were unable to recover it. 
00:26:25.884 [2024-04-27 02:45:59.219490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.884 [2024-04-27 02:45:59.219984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.884 [2024-04-27 02:45:59.219994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.884 qpair failed and we were unable to recover it. 00:26:25.884 [2024-04-27 02:45:59.220552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.884 [2024-04-27 02:45:59.220706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.884 [2024-04-27 02:45:59.220716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.884 qpair failed and we were unable to recover it. 00:26:25.884 [2024-04-27 02:45:59.221219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.884 [2024-04-27 02:45:59.221677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.884 [2024-04-27 02:45:59.221687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.884 qpair failed and we were unable to recover it. 00:26:25.884 [2024-04-27 02:45:59.222132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.884 [2024-04-27 02:45:59.222718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.884 [2024-04-27 02:45:59.222745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.884 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.223112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.223594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.223622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.224076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.224652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.224680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.225011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.225469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.225497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 
00:26:25.885 [2024-04-27 02:45:59.225995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.226549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.226577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.226832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.227248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.227254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.227580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.228038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.228044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.228272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.228501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.228507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.228951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.229543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.229571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.230091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.230531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.230562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.231096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.231667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.231694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 
00:26:25.885 [2024-04-27 02:45:59.232210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.232713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.232740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.233194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.233693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.233701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.233819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.234255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.234261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.234729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.235193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.235201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.235661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.236112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.236118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.236693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.237263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.237272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.237831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.237946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.237958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 
00:26:25.885 [2024-04-27 02:45:59.238434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.238910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.238918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.239164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.239623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.239633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.239962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.240434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.240441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.240770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.241174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.241179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.241472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.241902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.241908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.242362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.242869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.242875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.243090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.243544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.243551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 
00:26:25.885 [2024-04-27 02:45:59.244001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.244495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.244501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.244928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.245265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.885 [2024-04-27 02:45:59.245272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.885 qpair failed and we were unable to recover it. 00:26:25.885 [2024-04-27 02:45:59.245633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.245961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.245967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.246314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.246786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.246792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.247244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.247712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.247718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.248169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.248645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.248652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.249099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.249620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.249647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 
00:26:25.886 [2024-04-27 02:45:59.249998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.250488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.250515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.251008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.251555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.251582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.251838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.252310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.252317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.252789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.253207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.253213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.253476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.253799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.253806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.254272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.254758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.254765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.255221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.255447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.255453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 
00:26:25.886 [2024-04-27 02:45:59.255781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.256035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.256042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.256542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.256993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.256999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.257215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.257687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.257695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.258203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.258667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.258674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.258923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.259149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.259155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.259633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.260052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.260059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.260616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.261112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.261121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 
00:26:25.886 [2024-04-27 02:45:59.261543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.261990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.262000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.262575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.263065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.263073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.263650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.264189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.264198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.264556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.264925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.264932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.265521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.266012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.266020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.266565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.267060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.267068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 00:26:25.886 [2024-04-27 02:45:59.267480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.267875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.267884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.886 qpair failed and we were unable to recover it. 
00:26:25.886 [2024-04-27 02:45:59.268343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.886 [2024-04-27 02:45:59.268786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.268793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.269241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.269712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.269720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.270175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.270651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.270658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.270878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.271338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.271346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.271816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.272271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.272280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.272600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.272966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.272972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.273307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.273543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.273549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 
00:26:25.887 [2024-04-27 02:45:59.274025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.274475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.274482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.274944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.275208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.275214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.275681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.276130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.276136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.276552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.276866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.276875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.277332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.277815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.277822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.278041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.278537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.278544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.278889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.279341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.279348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 
00:26:25.887 [2024-04-27 02:45:59.279801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.280257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.280263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.280380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.280881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.280888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.281247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.281732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.281738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.282222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.282684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.282690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.283185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.283632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.283638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.284084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.284631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.284659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.285112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.285667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.285694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 
00:26:25.887 [2024-04-27 02:45:59.285927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.286520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.286547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.286799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.287273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.287285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.287782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.288174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.288180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.288653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.289198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.289204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.289688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.290134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.290140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.290686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.291187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.887 [2024-04-27 02:45:59.291196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.887 qpair failed and we were unable to recover it. 00:26:25.887 [2024-04-27 02:45:59.291570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.292108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.292118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 
00:26:25.888 [2024-04-27 02:45:59.292529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.293051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.293060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.293637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.293929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.293939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.294539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.295080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.295088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.295538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.295820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.295829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.296273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.296604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.296610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.296700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.297157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.297163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.297699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.297957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.297964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 
00:26:25.888 [2024-04-27 02:45:59.298314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.298816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.298823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.299343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.299800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.299806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.300258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.300486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.300504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.301002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.301457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.301464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.301918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.302337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.302344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.302848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.303186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.303192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.303395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.303908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.303914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 
00:26:25.888 [2024-04-27 02:45:59.304368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.304818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.304824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.305273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.305493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.305499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.305949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.306404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.306418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.306760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.307230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.307236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.307603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.307852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.307858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.308318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.308781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.308787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.309312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.309442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.309448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 
00:26:25.888 [2024-04-27 02:45:59.309823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.310040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.310049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.310251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.310718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.310725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.311174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.311643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.311649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.312102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.312502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.312508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.312961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.313458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.313465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.313911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.314363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.314369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.888 qpair failed and we were unable to recover it. 00:26:25.888 [2024-04-27 02:45:59.314715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.888 [2024-04-27 02:45:59.315025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.315032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 
00:26:25.889 [2024-04-27 02:45:59.315285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.315750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.315756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.316202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.316641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.316647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.316870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.317345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.317351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.317854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.318348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.318354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.318856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.319303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.319310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.319648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.319986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.319992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.320360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.320596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.320603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 
00:26:25.889 [2024-04-27 02:45:59.321073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.321528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.321535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.321782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.322223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.322229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.322689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.323149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.323155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.323609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.324062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.324069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.324500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.324967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.324977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.325288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.325754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.325760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.326370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.326716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.326723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 
00:26:25.889 [2024-04-27 02:45:59.327065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.327533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.327539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.327990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.328369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.328376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.328838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.329070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.329076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.329543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.329881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.329888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.330105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.330592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.330598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.331048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.331602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.331630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.331886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.332365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.332372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 
00:26:25.889 [2024-04-27 02:45:59.332852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.333196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.333203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.889 qpair failed and we were unable to recover it. 00:26:25.889 [2024-04-27 02:45:59.333557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.889 [2024-04-27 02:45:59.333915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.333922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.334377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.334606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.334613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.335087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.335433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.335440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.335797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.336045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.336052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.336511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.336962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.336969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.337423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.337883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.337889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 
00:26:25.890 [2024-04-27 02:45:59.338237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.338434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.338446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.338882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.339329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.339336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.339588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.339717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.339724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.340191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.340730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.340737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.341187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.341657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.341665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.342109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.342297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.342308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.342511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.342848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.342855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 
00:26:25.890 [2024-04-27 02:45:59.343231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.343752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.343759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.344204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.344637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.344645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.345091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.345475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.345503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.346044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.346608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.346636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.347127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.347690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.347718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.348224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.348311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.348321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.348547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.348997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.349008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 
00:26:25.890 [2024-04-27 02:45:59.349607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.350090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.350099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.350689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.351262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.351271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.351762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.352140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.352149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.352710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.353270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.353285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.353627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.353935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.353944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.354201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.354646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.354653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.355105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.355662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.355690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 
00:26:25.890 [2024-04-27 02:45:59.356053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.356299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.356314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.356781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.357232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.890 [2024-04-27 02:45:59.357238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.890 qpair failed and we were unable to recover it. 00:26:25.890 [2024-04-27 02:45:59.357688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.358185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.358195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.358670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.359117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.359124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.359665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.359948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.359958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.360550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.360948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.360957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.361288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.361506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.361512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 
00:26:25.891 [2024-04-27 02:45:59.361984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.362331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.362338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.362627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.362950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.362956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.363404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.363742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.363749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.364208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.364467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.364474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.364882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.365418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.365425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.365771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.366141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.366150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.366370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.366700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.366706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 
00:26:25.891 [2024-04-27 02:45:59.366966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.367352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.367360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.367582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.367998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.368006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.368234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.368668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.368676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.369113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.369537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.369544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.370040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.370583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.370612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.370945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.371443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.371452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.371883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.372383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.372390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 
00:26:25.891 [2024-04-27 02:45:59.372880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.373260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.373267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.373798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.374149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.374159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.374716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.375242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.375252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.375848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.376000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.376011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.376589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.377069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.377079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.377700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.377966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.377977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.378477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.378979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.378987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 
00:26:25.891 [2024-04-27 02:45:59.379496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.380000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.380007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.380496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.380611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.380617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.891 qpair failed and we were unable to recover it. 00:26:25.891 [2024-04-27 02:45:59.381076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.891 [2024-04-27 02:45:59.381435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.381443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.381893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.382346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.382353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.382765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.383006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.383013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.383241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.383694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.383701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.384076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.384169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.384175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 
00:26:25.892 [2024-04-27 02:45:59.384333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.384815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.384821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.385262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.385682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.385688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.386183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.386695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.386702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.386938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.387253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.387259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.387628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.388088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.388095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.388680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.388948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.388957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.389469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.389937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.389945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 
00:26:25.892 [2024-04-27 02:45:59.390432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.390649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.390656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.391146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.391613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.391621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.392099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.392475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.392504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.392818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.393153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.393162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.393659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.393785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.393794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.394272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.394729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.394737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.395230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.395691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.395700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 
00:26:25.892 [2024-04-27 02:45:59.396080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.396550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.396579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.397056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.397655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.397684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.398150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.398760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.398789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.399241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 02:45:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:25.892 [2024-04-27 02:45:59.399791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 02:45:59 -- common/autotest_common.sh@850 -- # return 0 00:26:25.892 [2024-04-27 02:45:59.399820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 [2024-04-27 02:45:59.400051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 02:45:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:25.892 [2024-04-27 02:45:59.400232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.400244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 00:26:25.892 02:45:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:25.892 02:45:59 -- common/autotest_common.sh@10 -- # set +x 00:26:25.892 [2024-04-27 02:45:59.400701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.400781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.400791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.892 qpair failed and we were unable to recover it. 
00:26:25.892 [2024-04-27 02:45:59.401272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.401749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.892 [2024-04-27 02:45:59.401756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.402016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.402554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.402561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.403014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.403247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.403253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.403686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.404178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.404185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.404672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.405136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.405144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.405635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.405911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.405919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.406498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.407024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.407033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 
00:26:25.893 [2024-04-27 02:45:59.407615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.407982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.407991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.408216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.408523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.408530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.409004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.409507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.409514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.409765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.410123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.410130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.410580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.411077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.411084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.411520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.411806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.411815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.412285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.412782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.412789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 
00:26:25.893 [2024-04-27 02:45:59.413002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.413483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.413491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.413742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.413954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.413961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.414500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.414960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.414966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.415211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.415457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.415464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.415878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.416386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.416393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.416874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.417365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.417372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.417831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.418321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.418328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 
00:26:25.893 [2024-04-27 02:45:59.418799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.419257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.419264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.419643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.420084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.420092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.420584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.421085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.421091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.421546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.422067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.422076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.422609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.423131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.423140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.893 [2024-04-27 02:45:59.423844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.424478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.893 [2024-04-27 02:45:59.424506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.893 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.425003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.425603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.425631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 
00:26:25.894 [2024-04-27 02:45:59.425883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.426144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.426151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.426633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.427121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.427127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.427661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.427946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.427955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.428447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.428951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.428959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.429288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.429822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.429828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.430318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.430650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.430657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.430883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.431238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.431244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 
00:26:25.894 [2024-04-27 02:45:59.431794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.432252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.432258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.432794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.433019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.433025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.433640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.433810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.433822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.434187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.434433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.434440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.434875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.435375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.435382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.435872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.436131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.436138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.436394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.436839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.436845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 
00:26:25.894 [2024-04-27 02:45:59.437201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.437667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.437674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 [2024-04-27 02:45:59.437900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.438380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.438386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 02:45:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.894 [2024-04-27 02:45:59.438856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 02:45:59 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:25.894 [2024-04-27 02:45:59.439357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.439365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.894 qpair failed and we were unable to recover it. 00:26:25.894 02:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.894 02:45:59 -- common/autotest_common.sh@10 -- # set +x 00:26:25.894 [2024-04-27 02:45:59.439712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.440216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.894 [2024-04-27 02:45:59.440224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.440709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.441200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.441211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.441583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.442031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.442038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 
00:26:25.895 [2024-04-27 02:45:59.442514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.443024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.443030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.443621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.444113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.444122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.444547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.445035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.445044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.445473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.445959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.445968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.446540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.446800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.446808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.447320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.447799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.447805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.448290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.448843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.448849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 
00:26:25.895 [2024-04-27 02:45:59.449195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.449716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.449723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.450212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.450641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.450647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.450974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.451467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.451474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.451954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.452393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.452400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.452889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.453397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.453403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.453597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.453682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.453693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.453960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.454427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.454433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 
00:26:25.895 [2024-04-27 02:45:59.454919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.455150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.455156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 Malloc0 00:26:25.895 [2024-04-27 02:45:59.455428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.455651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.455664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 02:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.895 [2024-04-27 02:45:59.456197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 02:45:59 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:25.895 [2024-04-27 02:45:59.456552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.456559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 02:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.895 02:45:59 -- common/autotest_common.sh@10 -- # set +x 00:26:25.895 [2024-04-27 02:45:59.457012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.457483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.457489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.457700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.458203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.458209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.458675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.459010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.459017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.459391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.459856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.459862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 
00:26:25.895 [2024-04-27 02:45:59.460343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.460693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.460699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.461178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.461430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.895 [2024-04-27 02:45:59.461437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.895 qpair failed and we were unable to recover it. 00:26:25.895 [2024-04-27 02:45:59.461904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.462020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.462029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.462422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.462747] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.896 [2024-04-27 02:45:59.462887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.462894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.463384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.463722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.463729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.464212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.464672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.464680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.465046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.465539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.465547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 
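The rpc_cmd trace from host/target_disconnect.sh@21 interleaved above creates the TCP transport on the target, and the "*** TCP Transport Init ***" notice is the target acknowledging it. A minimal standalone sketch of the same step with SPDK's rpc.py, leaving out the harness-specific -o option and assuming default transport parameters, would be roughly:
$ scripts/rpc.py nvmf_create_transport -t tcp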
00:26:25.896 [2024-04-27 02:45:59.466035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.466583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.466611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.466940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.467466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.467474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.467750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.468154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.468161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.468638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.468908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.468915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.469411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.469613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.469621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.469981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.470488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.470496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.470993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.471375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.471382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 
00:26:25.896 [2024-04-27 02:45:59.471597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.471793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.471801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 02:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.896 [2024-04-27 02:45:59.472140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 02:45:59 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:25.896 [2024-04-27 02:45:59.472407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.472415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 02:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.896 02:45:59 -- common/autotest_common.sh@10 -- # set +x 00:26:25.896 [2024-04-27 02:45:59.472910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.473379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.473387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.473894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.474341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.474349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.474858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.475362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.475369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.475868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.476366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.476374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 
00:26:25.896 [2024-04-27 02:45:59.476831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.477171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.477178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.477544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.477992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.477999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.478232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.478416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.478428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.478907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.479164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.479173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.479661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.480153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.480161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.480627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.480985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.480992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.481568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.482047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.482057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 
00:26:25.896 [2024-04-27 02:45:59.482536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.483057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.896 [2024-04-27 02:45:59.483067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.896 qpair failed and we were unable to recover it. 00:26:25.896 [2024-04-27 02:45:59.483539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 02:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.897 [2024-04-27 02:45:59.484117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.484127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:25.897 02:45:59 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:25.897 02:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.897 [2024-04-27 02:45:59.484550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 02:45:59 -- common/autotest_common.sh@10 -- # set +x 00:26:25.897 [2024-04-27 02:45:59.484920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.484931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:25.897 [2024-04-27 02:45:59.485535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.485903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.485913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:25.897 [2024-04-27 02:45:59.486412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.486653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.486661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:25.897 [2024-04-27 02:45:59.487162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.487644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.487651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:25.897 [2024-04-27 02:45:59.488095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.488684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.488713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 
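Still interleaved with the connect retries, target_disconnect.sh@22 creates subsystem nqn.2016-06.io.spdk:cnode1 and target_disconnect.sh@24 attaches the Malloc0 bdev to it as a namespace. A rough standalone sketch of the same steps via rpc.py follows; the bdev_malloc_create size and block-size arguments are assumptions, since the log only shows the resulting bdev name Malloc0:
$ scripts/rpc.py bdev_malloc_create -b Malloc0 64 512    # sizes assumed: 64 MiB bdev, 512-byte blocks
$ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0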
00:26:25.897 [2024-04-27 02:45:59.488948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.489446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.489454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:25.897 [2024-04-27 02:45:59.489945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.490399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.490412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:25.897 [2024-04-27 02:45:59.490912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.491238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.491246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:25.897 [2024-04-27 02:45:59.491679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.492017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.492026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:25.897 [2024-04-27 02:45:59.492501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.493024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.493035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:25.897 [2024-04-27 02:45:59.493482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.493773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.493783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:25.897 [2024-04-27 02:45:59.494122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.494467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.494475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 
00:26:25.897 [2024-04-27 02:45:59.494699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.495221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.495229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:25.897 [2024-04-27 02:45:59.495466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 02:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.897 [2024-04-27 02:45:59.495912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.495920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:25.897 02:45:59 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.897 [2024-04-27 02:45:59.496420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 02:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.897 02:45:59 -- common/autotest_common.sh@10 -- # set +x 00:26:25.897 [2024-04-27 02:45:59.496917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.897 [2024-04-27 02:45:59.496925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:25.897 qpair failed and we were unable to recover it. 00:26:26.160 [2024-04-27 02:45:59.497288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.160 [2024-04-27 02:45:59.497627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.160 [2024-04-27 02:45:59.497635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:26.160 qpair failed and we were unable to recover it. 00:26:26.160 [2024-04-27 02:45:59.497969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.160 [2024-04-27 02:45:59.498211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.160 [2024-04-27 02:45:59.498218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:26.160 qpair failed and we were unable to recover it. 00:26:26.160 [2024-04-27 02:45:59.498680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.160 [2024-04-27 02:45:59.499184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.161 [2024-04-27 02:45:59.499191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:26.161 qpair failed and we were unable to recover it. 
00:26:26.161 [2024-04-27 02:45:59.499512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.161 [2024-04-27 02:45:59.500008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.161 [2024-04-27 02:45:59.500015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:26.161 qpair failed and we were unable to recover it. 00:26:26.161 [2024-04-27 02:45:59.500506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.161 [2024-04-27 02:45:59.501005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.161 [2024-04-27 02:45:59.501012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:26.161 qpair failed and we were unable to recover it. 00:26:26.161 [2024-04-27 02:45:59.501467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.161 [2024-04-27 02:45:59.501949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.161 [2024-04-27 02:45:59.501959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:26.161 qpair failed and we were unable to recover it. 00:26:26.161 [2024-04-27 02:45:59.502483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.161 [2024-04-27 02:45:59.503010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.161 [2024-04-27 02:45:59.503020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff40000b90 with addr=10.0.0.2, port=4420 00:26:26.161 qpair failed and we were unable to recover it. 00:26:26.161 [2024-04-27 02:45:59.503026] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.161 02:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.161 02:45:59 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:26.161 02:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.161 02:45:59 -- common/autotest_common.sh@10 -- # set +x 00:26:26.161 [2024-04-27 02:45:59.513578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.161 [2024-04-27 02:45:59.513678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.161 [2024-04-27 02:45:59.513699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.161 [2024-04-27 02:45:59.513706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.161 [2024-04-27 02:45:59.513711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.161 [2024-04-27 02:45:59.513728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.161 qpair failed and we were unable to recover it. 
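With the listeners added by target_disconnect.sh@25 and @26, the target prints the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice and TCP-level connects stop being refused. The same two listeners expressed as direct rpc.py calls, followed by illustrative nvme-cli commands (not part of this run) showing how an initiator could then discover and connect to the subsystem:
$ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$ scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$ nvme discover -t tcp -a 10.0.0.2 -s 4420
$ nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1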
00:26:26.161 02:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.161 02:45:59 -- host/target_disconnect.sh@58 -- # wait 281390 00:26:26.161 [2024-04-27 02:45:59.523568] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.161 [2024-04-27 02:45:59.523662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.161 [2024-04-27 02:45:59.523676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.161 [2024-04-27 02:45:59.523682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.161 [2024-04-27 02:45:59.523686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.161 [2024-04-27 02:45:59.523699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.161 qpair failed and we were unable to recover it. 00:26:26.161 [2024-04-27 02:45:59.533598] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.161 [2024-04-27 02:45:59.533687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.161 [2024-04-27 02:45:59.533700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.161 [2024-04-27 02:45:59.533706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.161 [2024-04-27 02:45:59.533711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.161 [2024-04-27 02:45:59.533723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.161 qpair failed and we were unable to recover it. 00:26:26.161 [2024-04-27 02:45:59.543624] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.161 [2024-04-27 02:45:59.543716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.161 [2024-04-27 02:45:59.543729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.161 [2024-04-27 02:45:59.543735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.161 [2024-04-27 02:45:59.543739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.161 [2024-04-27 02:45:59.543752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.161 qpair failed and we were unable to recover it. 
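From here the failure mode changes: the TCP socket connects, but the Fabrics CONNECT for I/O qpair 2 is rejected because the target does not recognize controller ID 0x1 (the _nvmf_ctrlr_add_io_qpair error), which is consistent with the broken-connection states host/target_disconnect.sh is exercising. In "sct 1, sc 130", sct 1 is the command-specific status type and sc 130 is 0x82, the Connect Invalid Parameters status defined for the Fabrics Connect command; rc -5 and the "CQ transport error -6 (No such device or address)" are the host-side EIO/ENXIO fallout. A trivial decode of those numbers, assuming nothing beyond the Linux errno table:
$ python3 -c 'print(hex(130))'    # 0x82 -> Fabrics Connect: Invalid Parameters
0x82
$ python3 -c 'import os; print(os.strerror(5)); print(os.strerror(6))'
Input/output error
No such device or address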
00:26:26.161 [2024-04-27 02:45:59.553574] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.161 [2024-04-27 02:45:59.553677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.161 [2024-04-27 02:45:59.553691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.161 [2024-04-27 02:45:59.553696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.161 [2024-04-27 02:45:59.553701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.161 [2024-04-27 02:45:59.553713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.161 qpair failed and we were unable to recover it. 00:26:26.161 [2024-04-27 02:45:59.563617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.161 [2024-04-27 02:45:59.563699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.161 [2024-04-27 02:45:59.563712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.161 [2024-04-27 02:45:59.563718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.161 [2024-04-27 02:45:59.563726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.161 [2024-04-27 02:45:59.563738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.161 qpair failed and we were unable to recover it. 00:26:26.161 [2024-04-27 02:45:59.573643] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.161 [2024-04-27 02:45:59.573742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.161 [2024-04-27 02:45:59.573755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.161 [2024-04-27 02:45:59.573761] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.161 [2024-04-27 02:45:59.573766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.161 [2024-04-27 02:45:59.573778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.161 qpair failed and we were unable to recover it. 
00:26:26.161 [2024-04-27 02:45:59.583631] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.161 [2024-04-27 02:45:59.583724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.161 [2024-04-27 02:45:59.583738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.161 [2024-04-27 02:45:59.583744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.161 [2024-04-27 02:45:59.583748] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.161 [2024-04-27 02:45:59.583760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.161 qpair failed and we were unable to recover it. 00:26:26.161 [2024-04-27 02:45:59.593665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.161 [2024-04-27 02:45:59.593756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.161 [2024-04-27 02:45:59.593769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.161 [2024-04-27 02:45:59.593775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.161 [2024-04-27 02:45:59.593780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.161 [2024-04-27 02:45:59.593791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.161 qpair failed and we were unable to recover it. 00:26:26.161 [2024-04-27 02:45:59.603673] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.161 [2024-04-27 02:45:59.603760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.161 [2024-04-27 02:45:59.603773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.161 [2024-04-27 02:45:59.603779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.161 [2024-04-27 02:45:59.603783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.161 [2024-04-27 02:45:59.603795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.161 qpair failed and we were unable to recover it. 
00:26:26.161 [2024-04-27 02:45:59.613657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.161 [2024-04-27 02:45:59.613785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.161 [2024-04-27 02:45:59.613798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.161 [2024-04-27 02:45:59.613804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.161 [2024-04-27 02:45:59.613808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.613820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 00:26:26.162 [2024-04-27 02:45:59.623763] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.162 [2024-04-27 02:45:59.623859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.162 [2024-04-27 02:45:59.623879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.162 [2024-04-27 02:45:59.623886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.162 [2024-04-27 02:45:59.623891] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.623907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 00:26:26.162 [2024-04-27 02:45:59.633818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.162 [2024-04-27 02:45:59.633916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.162 [2024-04-27 02:45:59.633936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.162 [2024-04-27 02:45:59.633944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.162 [2024-04-27 02:45:59.633948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.633964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 
00:26:26.162 [2024-04-27 02:45:59.643900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.162 [2024-04-27 02:45:59.644011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.162 [2024-04-27 02:45:59.644026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.162 [2024-04-27 02:45:59.644032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.162 [2024-04-27 02:45:59.644037] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.644050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 00:26:26.162 [2024-04-27 02:45:59.653840] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.162 [2024-04-27 02:45:59.653927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.162 [2024-04-27 02:45:59.653947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.162 [2024-04-27 02:45:59.653957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.162 [2024-04-27 02:45:59.653962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.653977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 00:26:26.162 [2024-04-27 02:45:59.663875] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.162 [2024-04-27 02:45:59.663964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.162 [2024-04-27 02:45:59.663984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.162 [2024-04-27 02:45:59.663991] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.162 [2024-04-27 02:45:59.663996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.664012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 
00:26:26.162 [2024-04-27 02:45:59.673801] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.162 [2024-04-27 02:45:59.673895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.162 [2024-04-27 02:45:59.673915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.162 [2024-04-27 02:45:59.673922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.162 [2024-04-27 02:45:59.673927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.673942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 00:26:26.162 [2024-04-27 02:45:59.683936] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.162 [2024-04-27 02:45:59.684025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.162 [2024-04-27 02:45:59.684046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.162 [2024-04-27 02:45:59.684053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.162 [2024-04-27 02:45:59.684058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.684074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 00:26:26.162 [2024-04-27 02:45:59.693948] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.162 [2024-04-27 02:45:59.694037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.162 [2024-04-27 02:45:59.694056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.162 [2024-04-27 02:45:59.694063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.162 [2024-04-27 02:45:59.694068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.694084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 
00:26:26.162 [2024-04-27 02:45:59.703960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.162 [2024-04-27 02:45:59.704050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.162 [2024-04-27 02:45:59.704070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.162 [2024-04-27 02:45:59.704076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.162 [2024-04-27 02:45:59.704081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.704097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 00:26:26.162 [2024-04-27 02:45:59.714073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.162 [2024-04-27 02:45:59.714166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.162 [2024-04-27 02:45:59.714186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.162 [2024-04-27 02:45:59.714193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.162 [2024-04-27 02:45:59.714198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.714213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 00:26:26.162 [2024-04-27 02:45:59.724085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.162 [2024-04-27 02:45:59.724168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.162 [2024-04-27 02:45:59.724182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.162 [2024-04-27 02:45:59.724189] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.162 [2024-04-27 02:45:59.724194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.724206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 
00:26:26.162 [2024-04-27 02:45:59.734055] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.162 [2024-04-27 02:45:59.734140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.162 [2024-04-27 02:45:59.734154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.162 [2024-04-27 02:45:59.734160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.162 [2024-04-27 02:45:59.734164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.734176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 00:26:26.162 [2024-04-27 02:45:59.744078] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.162 [2024-04-27 02:45:59.744165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.162 [2024-04-27 02:45:59.744182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.162 [2024-04-27 02:45:59.744188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.162 [2024-04-27 02:45:59.744193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.162 [2024-04-27 02:45:59.744205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.162 qpair failed and we were unable to recover it. 00:26:26.162 [2024-04-27 02:45:59.754093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.163 [2024-04-27 02:45:59.754183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.163 [2024-04-27 02:45:59.754197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.163 [2024-04-27 02:45:59.754203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.163 [2024-04-27 02:45:59.754207] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.163 [2024-04-27 02:45:59.754219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.163 qpair failed and we were unable to recover it. 
00:26:26.163 [2024-04-27 02:45:59.764137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.163 [2024-04-27 02:45:59.764226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.163 [2024-04-27 02:45:59.764239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.163 [2024-04-27 02:45:59.764245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.163 [2024-04-27 02:45:59.764249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.163 [2024-04-27 02:45:59.764261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.163 qpair failed and we were unable to recover it. 00:26:26.163 [2024-04-27 02:45:59.774081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.163 [2024-04-27 02:45:59.774163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.163 [2024-04-27 02:45:59.774176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.163 [2024-04-27 02:45:59.774182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.163 [2024-04-27 02:45:59.774186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.163 [2024-04-27 02:45:59.774198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.163 qpair failed and we were unable to recover it. 00:26:26.426 [2024-04-27 02:45:59.784270] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.426 [2024-04-27 02:45:59.784374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.426 [2024-04-27 02:45:59.784388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.426 [2024-04-27 02:45:59.784394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.426 [2024-04-27 02:45:59.784399] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.426 [2024-04-27 02:45:59.784411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.426 qpair failed and we were unable to recover it. 
00:26:26.426 [2024-04-27 02:45:59.794295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.426 [2024-04-27 02:45:59.794386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.426 [2024-04-27 02:45:59.794400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.426 [2024-04-27 02:45:59.794406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.426 [2024-04-27 02:45:59.794411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.426 [2024-04-27 02:45:59.794423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.426 qpair failed and we were unable to recover it. 00:26:26.426 [2024-04-27 02:45:59.804403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.426 [2024-04-27 02:45:59.804495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.426 [2024-04-27 02:45:59.804509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.426 [2024-04-27 02:45:59.804514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.426 [2024-04-27 02:45:59.804519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.426 [2024-04-27 02:45:59.804531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.426 qpair failed and we were unable to recover it. 00:26:26.426 [2024-04-27 02:45:59.814365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.426 [2024-04-27 02:45:59.814463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.426 [2024-04-27 02:45:59.814477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.426 [2024-04-27 02:45:59.814482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.426 [2024-04-27 02:45:59.814487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.426 [2024-04-27 02:45:59.814498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.426 qpair failed and we were unable to recover it. 
00:26:26.426 [2024-04-27 02:45:59.824313] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.426 [2024-04-27 02:45:59.824405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.426 [2024-04-27 02:45:59.824418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.426 [2024-04-27 02:45:59.824424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.426 [2024-04-27 02:45:59.824429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.426 [2024-04-27 02:45:59.824441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.426 qpair failed and we were unable to recover it. 00:26:26.426 [2024-04-27 02:45:59.834295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.426 [2024-04-27 02:45:59.834383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.426 [2024-04-27 02:45:59.834399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.426 [2024-04-27 02:45:59.834405] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.426 [2024-04-27 02:45:59.834410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.426 [2024-04-27 02:45:59.834422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.426 qpair failed and we were unable to recover it. 00:26:26.426 [2024-04-27 02:45:59.844395] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.426 [2024-04-27 02:45:59.844484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.426 [2024-04-27 02:45:59.844498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.426 [2024-04-27 02:45:59.844504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.426 [2024-04-27 02:45:59.844509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.426 [2024-04-27 02:45:59.844521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.426 qpair failed and we were unable to recover it. 
00:26:26.426 [2024-04-27 02:45:59.854415] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.426 [2024-04-27 02:45:59.854514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.426 [2024-04-27 02:45:59.854528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.426 [2024-04-27 02:45:59.854534] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.426 [2024-04-27 02:45:59.854538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.426 [2024-04-27 02:45:59.854550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.426 qpair failed and we were unable to recover it. 00:26:26.426 [2024-04-27 02:45:59.864418] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.426 [2024-04-27 02:45:59.864502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.426 [2024-04-27 02:45:59.864515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.426 [2024-04-27 02:45:59.864521] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.426 [2024-04-27 02:45:59.864526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.864537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 00:26:26.427 [2024-04-27 02:45:59.874471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.427 [2024-04-27 02:45:59.874559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.427 [2024-04-27 02:45:59.874572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.427 [2024-04-27 02:45:59.874578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.427 [2024-04-27 02:45:59.874583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.874599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 
00:26:26.427 [2024-04-27 02:45:59.884388] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.427 [2024-04-27 02:45:59.884467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.427 [2024-04-27 02:45:59.884480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.427 [2024-04-27 02:45:59.884486] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.427 [2024-04-27 02:45:59.884491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.884502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 00:26:26.427 [2024-04-27 02:45:59.894452] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.427 [2024-04-27 02:45:59.894535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.427 [2024-04-27 02:45:59.894548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.427 [2024-04-27 02:45:59.894555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.427 [2024-04-27 02:45:59.894559] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.894571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 00:26:26.427 [2024-04-27 02:45:59.904524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.427 [2024-04-27 02:45:59.904615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.427 [2024-04-27 02:45:59.904629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.427 [2024-04-27 02:45:59.904634] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.427 [2024-04-27 02:45:59.904639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.904651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 
00:26:26.427 [2024-04-27 02:45:59.914580] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.427 [2024-04-27 02:45:59.914677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.427 [2024-04-27 02:45:59.914690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.427 [2024-04-27 02:45:59.914696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.427 [2024-04-27 02:45:59.914700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.914712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 00:26:26.427 [2024-04-27 02:45:59.924674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.427 [2024-04-27 02:45:59.924772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.427 [2024-04-27 02:45:59.924788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.427 [2024-04-27 02:45:59.924794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.427 [2024-04-27 02:45:59.924798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.924810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 00:26:26.427 [2024-04-27 02:45:59.934613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.427 [2024-04-27 02:45:59.934717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.427 [2024-04-27 02:45:59.934731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.427 [2024-04-27 02:45:59.934737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.427 [2024-04-27 02:45:59.934741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.934753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 
00:26:26.427 [2024-04-27 02:45:59.944662] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.427 [2024-04-27 02:45:59.944746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.427 [2024-04-27 02:45:59.944759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.427 [2024-04-27 02:45:59.944766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.427 [2024-04-27 02:45:59.944771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.944782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 00:26:26.427 [2024-04-27 02:45:59.954674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.427 [2024-04-27 02:45:59.954774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.427 [2024-04-27 02:45:59.954787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.427 [2024-04-27 02:45:59.954793] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.427 [2024-04-27 02:45:59.954797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.954809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 00:26:26.427 [2024-04-27 02:45:59.964725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.427 [2024-04-27 02:45:59.964810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.427 [2024-04-27 02:45:59.964823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.427 [2024-04-27 02:45:59.964829] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.427 [2024-04-27 02:45:59.964837] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.964848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 
00:26:26.427 [2024-04-27 02:45:59.974749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.427 [2024-04-27 02:45:59.974839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.427 [2024-04-27 02:45:59.974853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.427 [2024-04-27 02:45:59.974859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.427 [2024-04-27 02:45:59.974863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.974875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 00:26:26.427 [2024-04-27 02:45:59.984800] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.427 [2024-04-27 02:45:59.984886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.427 [2024-04-27 02:45:59.984900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.427 [2024-04-27 02:45:59.984906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.427 [2024-04-27 02:45:59.984911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.984922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 00:26:26.427 [2024-04-27 02:45:59.994792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.427 [2024-04-27 02:45:59.994892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.427 [2024-04-27 02:45:59.994913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.427 [2024-04-27 02:45:59.994920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.427 [2024-04-27 02:45:59.994925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.427 [2024-04-27 02:45:59.994941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.427 qpair failed and we were unable to recover it. 
00:26:26.427 [2024-04-27 02:46:00.005265] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.428 [2024-04-27 02:46:00.005385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.428 [2024-04-27 02:46:00.005405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.428 [2024-04-27 02:46:00.005412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.428 [2024-04-27 02:46:00.005417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.428 [2024-04-27 02:46:00.005433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.428 qpair failed and we were unable to recover it. 00:26:26.428 [2024-04-27 02:46:00.014760] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.428 [2024-04-27 02:46:00.014855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.428 [2024-04-27 02:46:00.014870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.428 [2024-04-27 02:46:00.014876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.428 [2024-04-27 02:46:00.014881] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.428 [2024-04-27 02:46:00.014894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.428 qpair failed and we were unable to recover it. 00:26:26.428 [2024-04-27 02:46:00.024785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.428 [2024-04-27 02:46:00.024873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.428 [2024-04-27 02:46:00.024887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.428 [2024-04-27 02:46:00.024893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.428 [2024-04-27 02:46:00.024898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.428 [2024-04-27 02:46:00.024910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.428 qpair failed and we were unable to recover it. 
00:26:26.428 [2024-04-27 02:46:00.035101] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.428 [2024-04-27 02:46:00.035203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.428 [2024-04-27 02:46:00.035224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.428 [2024-04-27 02:46:00.035230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.428 [2024-04-27 02:46:00.035236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.428 [2024-04-27 02:46:00.035250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.428 qpair failed and we were unable to recover it. 00:26:26.689 [2024-04-27 02:46:00.044947] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.689 [2024-04-27 02:46:00.045081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.689 [2024-04-27 02:46:00.045096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.689 [2024-04-27 02:46:00.045102] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.689 [2024-04-27 02:46:00.045107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.689 [2024-04-27 02:46:00.045120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.689 qpair failed and we were unable to recover it. 00:26:26.689 [2024-04-27 02:46:00.054956] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.689 [2024-04-27 02:46:00.055045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.689 [2024-04-27 02:46:00.055059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.689 [2024-04-27 02:46:00.055068] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.689 [2024-04-27 02:46:00.055073] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.689 [2024-04-27 02:46:00.055086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.689 qpair failed and we were unable to recover it. 
00:26:26.689 [2024-04-27 02:46:00.065036] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.689 [2024-04-27 02:46:00.065123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.689 [2024-04-27 02:46:00.065136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.689 [2024-04-27 02:46:00.065142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.689 [2024-04-27 02:46:00.065147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.689 [2024-04-27 02:46:00.065158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.689 qpair failed and we were unable to recover it. 00:26:26.689 [2024-04-27 02:46:00.074903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.689 [2024-04-27 02:46:00.074991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.689 [2024-04-27 02:46:00.075005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.689 [2024-04-27 02:46:00.075011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.689 [2024-04-27 02:46:00.075015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.689 [2024-04-27 02:46:00.075027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.689 qpair failed and we were unable to recover it. 00:26:26.689 [2024-04-27 02:46:00.084929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.689 [2024-04-27 02:46:00.085017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.689 [2024-04-27 02:46:00.085031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.689 [2024-04-27 02:46:00.085037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.689 [2024-04-27 02:46:00.085041] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.689 [2024-04-27 02:46:00.085053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.689 qpair failed and we were unable to recover it. 
00:26:26.689 [2024-04-27 02:46:00.095056] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.689 [2024-04-27 02:46:00.095141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.689 [2024-04-27 02:46:00.095154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.689 [2024-04-27 02:46:00.095159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.689 [2024-04-27 02:46:00.095164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.689 [2024-04-27 02:46:00.095175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.689 qpair failed and we were unable to recover it. 00:26:26.689 [2024-04-27 02:46:00.105091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.689 [2024-04-27 02:46:00.105175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.689 [2024-04-27 02:46:00.105188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.689 [2024-04-27 02:46:00.105193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.689 [2024-04-27 02:46:00.105198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.689 [2024-04-27 02:46:00.105210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.689 qpair failed and we were unable to recover it. 00:26:26.689 [2024-04-27 02:46:00.115150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.689 [2024-04-27 02:46:00.115242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.689 [2024-04-27 02:46:00.115255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.689 [2024-04-27 02:46:00.115262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.689 [2024-04-27 02:46:00.115266] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.689 [2024-04-27 02:46:00.115283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.689 qpair failed and we were unable to recover it. 
00:26:26.689 [2024-04-27 02:46:00.125188] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.689 [2024-04-27 02:46:00.125276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.689 [2024-04-27 02:46:00.125292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.689 [2024-04-27 02:46:00.125298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.689 [2024-04-27 02:46:00.125303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.689 [2024-04-27 02:46:00.125315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.689 qpair failed and we were unable to recover it. 00:26:26.689 [2024-04-27 02:46:00.135207] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.689 [2024-04-27 02:46:00.135482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.689 [2024-04-27 02:46:00.135497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.689 [2024-04-27 02:46:00.135503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.689 [2024-04-27 02:46:00.135507] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.689 [2024-04-27 02:46:00.135520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.689 qpair failed and we were unable to recover it. 00:26:26.689 [2024-04-27 02:46:00.145225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.689 [2024-04-27 02:46:00.145318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.689 [2024-04-27 02:46:00.145333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.689 [2024-04-27 02:46:00.145341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.689 [2024-04-27 02:46:00.145346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.689 [2024-04-27 02:46:00.145358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.689 qpair failed and we were unable to recover it. 
00:26:26.689 [2024-04-27 02:46:00.155228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.689 [2024-04-27 02:46:00.155323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.689 [2024-04-27 02:46:00.155336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.689 [2024-04-27 02:46:00.155343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.689 [2024-04-27 02:46:00.155347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.689 [2024-04-27 02:46:00.155360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 00:26:26.690 [2024-04-27 02:46:00.165251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.165337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.165350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.165356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.165361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.165373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 00:26:26.690 [2024-04-27 02:46:00.175313] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.175400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.175413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.175419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.175424] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.175436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 
00:26:26.690 [2024-04-27 02:46:00.185334] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.185417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.185431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.185436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.185441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.185453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 00:26:26.690 [2024-04-27 02:46:00.195345] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.195434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.195447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.195453] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.195458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.195470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 00:26:26.690 [2024-04-27 02:46:00.205393] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.205478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.205491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.205497] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.205501] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.205513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 
00:26:26.690 [2024-04-27 02:46:00.215424] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.215508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.215521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.215526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.215531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.215543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 00:26:26.690 [2024-04-27 02:46:00.225457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.225571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.225584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.225590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.225595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.225607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 00:26:26.690 [2024-04-27 02:46:00.235484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.235576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.235593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.235598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.235603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.235615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 
00:26:26.690 [2024-04-27 02:46:00.245559] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.245673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.245685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.245690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.245695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.245706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 00:26:26.690 [2024-04-27 02:46:00.255526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.255610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.255623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.255629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.255634] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.255645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 00:26:26.690 [2024-04-27 02:46:00.265535] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.265621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.265634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.265640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.265645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.265657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 
00:26:26.690 [2024-04-27 02:46:00.275606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.275696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.275709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.275716] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.275720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.275735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 00:26:26.690 [2024-04-27 02:46:00.285645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.285728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.285742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.285748] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.285753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.285765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 00:26:26.690 [2024-04-27 02:46:00.295622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.295720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.295733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.295738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.295743] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.295755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 
00:26:26.690 [2024-04-27 02:46:00.305680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.690 [2024-04-27 02:46:00.305766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.690 [2024-04-27 02:46:00.305779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.690 [2024-04-27 02:46:00.305785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.690 [2024-04-27 02:46:00.305790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.690 [2024-04-27 02:46:00.305802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.690 qpair failed and we were unable to recover it. 00:26:26.951 [2024-04-27 02:46:00.315697] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.951 [2024-04-27 02:46:00.315833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.951 [2024-04-27 02:46:00.315846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.951 [2024-04-27 02:46:00.315852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.951 [2024-04-27 02:46:00.315857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.951 [2024-04-27 02:46:00.315868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.951 qpair failed and we were unable to recover it. 00:26:26.951 [2024-04-27 02:46:00.325845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.951 [2024-04-27 02:46:00.325929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.951 [2024-04-27 02:46:00.325946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.951 [2024-04-27 02:46:00.325951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.325956] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.325968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 
00:26:26.952 [2024-04-27 02:46:00.335746] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.952 [2024-04-27 02:46:00.335844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.952 [2024-04-27 02:46:00.335865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.952 [2024-04-27 02:46:00.335872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.335877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.335893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 00:26:26.952 [2024-04-27 02:46:00.345792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.952 [2024-04-27 02:46:00.345882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.952 [2024-04-27 02:46:00.345903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.952 [2024-04-27 02:46:00.345909] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.345914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.345930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 00:26:26.952 [2024-04-27 02:46:00.355828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.952 [2024-04-27 02:46:00.355920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.952 [2024-04-27 02:46:00.355940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.952 [2024-04-27 02:46:00.355947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.355952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.355967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 
00:26:26.952 [2024-04-27 02:46:00.366007] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.952 [2024-04-27 02:46:00.366099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.952 [2024-04-27 02:46:00.366120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.952 [2024-04-27 02:46:00.366126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.366135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.366151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 00:26:26.952 [2024-04-27 02:46:00.375930] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.952 [2024-04-27 02:46:00.376016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.952 [2024-04-27 02:46:00.376031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.952 [2024-04-27 02:46:00.376037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.376041] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.376054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 00:26:26.952 [2024-04-27 02:46:00.385792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.952 [2024-04-27 02:46:00.385878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.952 [2024-04-27 02:46:00.385892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.952 [2024-04-27 02:46:00.385898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.385903] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.385915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 
00:26:26.952 [2024-04-27 02:46:00.395824] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.952 [2024-04-27 02:46:00.395955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.952 [2024-04-27 02:46:00.395969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.952 [2024-04-27 02:46:00.395974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.395979] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.395991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 00:26:26.952 [2024-04-27 02:46:00.405946] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.952 [2024-04-27 02:46:00.406045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.952 [2024-04-27 02:46:00.406059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.952 [2024-04-27 02:46:00.406065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.406069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.406081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 00:26:26.952 [2024-04-27 02:46:00.415940] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.952 [2024-04-27 02:46:00.416039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.952 [2024-04-27 02:46:00.416059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.952 [2024-04-27 02:46:00.416066] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.416070] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.416086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 
00:26:26.952 [2024-04-27 02:46:00.425920] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.952 [2024-04-27 02:46:00.426008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.952 [2024-04-27 02:46:00.426028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.952 [2024-04-27 02:46:00.426035] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.426040] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.426055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 00:26:26.952 [2024-04-27 02:46:00.436088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.952 [2024-04-27 02:46:00.436201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.952 [2024-04-27 02:46:00.436221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.952 [2024-04-27 02:46:00.436228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.436233] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.436248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 00:26:26.952 [2024-04-27 02:46:00.446013] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.952 [2024-04-27 02:46:00.446105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.952 [2024-04-27 02:46:00.446119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.952 [2024-04-27 02:46:00.446125] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.446130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.446143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 
00:26:26.952 [2024-04-27 02:46:00.455961] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.952 [2024-04-27 02:46:00.456052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.952 [2024-04-27 02:46:00.456065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.952 [2024-04-27 02:46:00.456071] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.952 [2024-04-27 02:46:00.456079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.952 [2024-04-27 02:46:00.456092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.952 qpair failed and we were unable to recover it. 00:26:26.952 [2024-04-27 02:46:00.466119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.953 [2024-04-27 02:46:00.466209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.953 [2024-04-27 02:46:00.466229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.953 [2024-04-27 02:46:00.466236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.953 [2024-04-27 02:46:00.466241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.953 [2024-04-27 02:46:00.466257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.953 qpair failed and we were unable to recover it. 00:26:26.953 [2024-04-27 02:46:00.476155] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.953 [2024-04-27 02:46:00.476243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.953 [2024-04-27 02:46:00.476257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.953 [2024-04-27 02:46:00.476263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.953 [2024-04-27 02:46:00.476268] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.953 [2024-04-27 02:46:00.476285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.953 qpair failed and we were unable to recover it. 
00:26:26.953 [2024-04-27 02:46:00.486175] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.953 [2024-04-27 02:46:00.486261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.953 [2024-04-27 02:46:00.486275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.953 [2024-04-27 02:46:00.486286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.953 [2024-04-27 02:46:00.486291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.953 [2024-04-27 02:46:00.486303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.953 qpair failed and we were unable to recover it. 00:26:26.953 [2024-04-27 02:46:00.496178] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.953 [2024-04-27 02:46:00.496262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.953 [2024-04-27 02:46:00.496275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.953 [2024-04-27 02:46:00.496286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.953 [2024-04-27 02:46:00.496290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.953 [2024-04-27 02:46:00.496303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.953 qpair failed and we were unable to recover it. 00:26:26.953 [2024-04-27 02:46:00.506232] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.953 [2024-04-27 02:46:00.506329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.953 [2024-04-27 02:46:00.506343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.953 [2024-04-27 02:46:00.506348] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.953 [2024-04-27 02:46:00.506353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.953 [2024-04-27 02:46:00.506365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.953 qpair failed and we were unable to recover it. 
00:26:26.953 [2024-04-27 02:46:00.516232] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.953 [2024-04-27 02:46:00.516324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.953 [2024-04-27 02:46:00.516337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.953 [2024-04-27 02:46:00.516343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.953 [2024-04-27 02:46:00.516348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.953 [2024-04-27 02:46:00.516359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.953 qpair failed and we were unable to recover it. 00:26:26.953 [2024-04-27 02:46:00.526287] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.953 [2024-04-27 02:46:00.526370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.953 [2024-04-27 02:46:00.526383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.953 [2024-04-27 02:46:00.526388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.953 [2024-04-27 02:46:00.526393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.953 [2024-04-27 02:46:00.526405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.953 qpair failed and we were unable to recover it. 00:26:26.953 [2024-04-27 02:46:00.536322] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.953 [2024-04-27 02:46:00.536409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.953 [2024-04-27 02:46:00.536422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.953 [2024-04-27 02:46:00.536428] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.953 [2024-04-27 02:46:00.536432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.953 [2024-04-27 02:46:00.536444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.953 qpair failed and we were unable to recover it. 
00:26:26.953 [2024-04-27 02:46:00.546358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.953 [2024-04-27 02:46:00.546445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.953 [2024-04-27 02:46:00.546458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.953 [2024-04-27 02:46:00.546467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.953 [2024-04-27 02:46:00.546471] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.953 [2024-04-27 02:46:00.546483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.953 qpair failed and we were unable to recover it. 00:26:26.953 [2024-04-27 02:46:00.556297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.953 [2024-04-27 02:46:00.556388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.953 [2024-04-27 02:46:00.556400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.953 [2024-04-27 02:46:00.556406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.953 [2024-04-27 02:46:00.556411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.953 [2024-04-27 02:46:00.556423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.953 qpair failed and we were unable to recover it. 00:26:26.953 [2024-04-27 02:46:00.566396] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.953 [2024-04-27 02:46:00.566498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.953 [2024-04-27 02:46:00.566511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.953 [2024-04-27 02:46:00.566517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.953 [2024-04-27 02:46:00.566522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:26.953 [2024-04-27 02:46:00.566534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.953 qpair failed and we were unable to recover it. 
00:26:27.216 [2024-04-27 02:46:00.576421] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.216 [2024-04-27 02:46:00.576510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.216 [2024-04-27 02:46:00.576523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.216 [2024-04-27 02:46:00.576529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.216 [2024-04-27 02:46:00.576534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.216 [2024-04-27 02:46:00.576547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.216 qpair failed and we were unable to recover it. 00:26:27.216 [2024-04-27 02:46:00.586463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.216 [2024-04-27 02:46:00.586577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.216 [2024-04-27 02:46:00.586591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.216 [2024-04-27 02:46:00.586597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.216 [2024-04-27 02:46:00.586602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.216 [2024-04-27 02:46:00.586614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.216 qpair failed and we were unable to recover it. 00:26:27.216 [2024-04-27 02:46:00.596456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.216 [2024-04-27 02:46:00.596543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.216 [2024-04-27 02:46:00.596556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.216 [2024-04-27 02:46:00.596563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.216 [2024-04-27 02:46:00.596568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.216 [2024-04-27 02:46:00.596579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.216 qpair failed and we were unable to recover it. 
00:26:27.216 [2024-04-27 02:46:00.606493] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.216 [2024-04-27 02:46:00.606576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.216 [2024-04-27 02:46:00.606589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.216 [2024-04-27 02:46:00.606595] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.216 [2024-04-27 02:46:00.606600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.216 [2024-04-27 02:46:00.606611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.216 qpair failed and we were unable to recover it. 00:26:27.216 [2024-04-27 02:46:00.616688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.216 [2024-04-27 02:46:00.616779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.216 [2024-04-27 02:46:00.616792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.216 [2024-04-27 02:46:00.616798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.216 [2024-04-27 02:46:00.616802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.216 [2024-04-27 02:46:00.616814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.216 qpair failed and we were unable to recover it. 00:26:27.216 [2024-04-27 02:46:00.626674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.216 [2024-04-27 02:46:00.626760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.216 [2024-04-27 02:46:00.626773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.216 [2024-04-27 02:46:00.626778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.216 [2024-04-27 02:46:00.626783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.216 [2024-04-27 02:46:00.626795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.216 qpair failed and we were unable to recover it. 
00:26:27.216 [2024-04-27 02:46:00.636615] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.216 [2024-04-27 02:46:00.636702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.216 [2024-04-27 02:46:00.636721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.216 [2024-04-27 02:46:00.636727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.216 [2024-04-27 02:46:00.636731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.216 [2024-04-27 02:46:00.636744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.216 qpair failed and we were unable to recover it. 00:26:27.216 [2024-04-27 02:46:00.646673] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.216 [2024-04-27 02:46:00.646780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.216 [2024-04-27 02:46:00.646793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.216 [2024-04-27 02:46:00.646798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.216 [2024-04-27 02:46:00.646803] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.216 [2024-04-27 02:46:00.646814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.216 qpair failed and we were unable to recover it. 00:26:27.216 [2024-04-27 02:46:00.656632] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.216 [2024-04-27 02:46:00.656716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.216 [2024-04-27 02:46:00.656729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.216 [2024-04-27 02:46:00.656735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.216 [2024-04-27 02:46:00.656740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.216 [2024-04-27 02:46:00.656751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.216 qpair failed and we were unable to recover it. 
00:26:27.216 [2024-04-27 02:46:00.666645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.216 [2024-04-27 02:46:00.666732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.216 [2024-04-27 02:46:00.666746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.216 [2024-04-27 02:46:00.666752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.216 [2024-04-27 02:46:00.666757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.216 [2024-04-27 02:46:00.666768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.216 qpair failed and we were unable to recover it. 00:26:27.216 [2024-04-27 02:46:00.676661] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.216 [2024-04-27 02:46:00.676751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.216 [2024-04-27 02:46:00.676764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.676770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.676775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.676790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 00:26:27.217 [2024-04-27 02:46:00.686742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.217 [2024-04-27 02:46:00.686825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.217 [2024-04-27 02:46:00.686839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.686845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.686850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.686862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 
00:26:27.217 [2024-04-27 02:46:00.696743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.217 [2024-04-27 02:46:00.696835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.217 [2024-04-27 02:46:00.696854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.696861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.696866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.696882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 00:26:27.217 [2024-04-27 02:46:00.706823] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.217 [2024-04-27 02:46:00.706943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.217 [2024-04-27 02:46:00.706964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.706971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.706975] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.706991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 00:26:27.217 [2024-04-27 02:46:00.716846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.217 [2024-04-27 02:46:00.716939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.217 [2024-04-27 02:46:00.716953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.716959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.716964] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.716977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 
00:26:27.217 [2024-04-27 02:46:00.726894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.217 [2024-04-27 02:46:00.727009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.217 [2024-04-27 02:46:00.727033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.727040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.727045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.727060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 00:26:27.217 [2024-04-27 02:46:00.736867] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.217 [2024-04-27 02:46:00.736957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.217 [2024-04-27 02:46:00.736976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.736983] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.736989] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.737004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 00:26:27.217 [2024-04-27 02:46:00.746902] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.217 [2024-04-27 02:46:00.746990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.217 [2024-04-27 02:46:00.747011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.747018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.747022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.747038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 
00:26:27.217 [2024-04-27 02:46:00.756915] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.217 [2024-04-27 02:46:00.757048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.217 [2024-04-27 02:46:00.757063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.757068] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.757073] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.757085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 00:26:27.217 [2024-04-27 02:46:00.766947] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.217 [2024-04-27 02:46:00.767074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.217 [2024-04-27 02:46:00.767088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.767094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.767102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.767114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 00:26:27.217 [2024-04-27 02:46:00.776978] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.217 [2024-04-27 02:46:00.777061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.217 [2024-04-27 02:46:00.777074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.777079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.777084] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.777096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 
00:26:27.217 [2024-04-27 02:46:00.786871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.217 [2024-04-27 02:46:00.786985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.217 [2024-04-27 02:46:00.787000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.787006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.787013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.787026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 00:26:27.217 [2024-04-27 02:46:00.797041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.217 [2024-04-27 02:46:00.797130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.217 [2024-04-27 02:46:00.797144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.797150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.797156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.797170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 00:26:27.217 [2024-04-27 02:46:00.807043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.217 [2024-04-27 02:46:00.807124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.217 [2024-04-27 02:46:00.807137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.217 [2024-04-27 02:46:00.807145] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.217 [2024-04-27 02:46:00.807152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.217 [2024-04-27 02:46:00.807164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.217 qpair failed and we were unable to recover it. 
00:26:27.217 [2024-04-27 02:46:00.817051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.218 [2024-04-27 02:46:00.817138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.218 [2024-04-27 02:46:00.817151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.218 [2024-04-27 02:46:00.817157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.218 [2024-04-27 02:46:00.817162] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.218 [2024-04-27 02:46:00.817174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.218 qpair failed and we were unable to recover it. 00:26:27.218 [2024-04-27 02:46:00.827023] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.218 [2024-04-27 02:46:00.827119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.218 [2024-04-27 02:46:00.827133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.218 [2024-04-27 02:46:00.827138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.218 [2024-04-27 02:46:00.827143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.218 [2024-04-27 02:46:00.827154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.218 qpair failed and we were unable to recover it. 00:26:27.481 [2024-04-27 02:46:00.837104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.481 [2024-04-27 02:46:00.837192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.481 [2024-04-27 02:46:00.837206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.481 [2024-04-27 02:46:00.837212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.481 [2024-04-27 02:46:00.837217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.481 [2024-04-27 02:46:00.837229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.481 qpair failed and we were unable to recover it. 
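Every failed attempt ends with the same closing record, so the excerpt can be triaged with ordinary text tools. The sketch below assumes this console output has been saved to a file (the name nvmf.log is an assumption) and simply counts the unrecovered qpairs and lists the distinct NVMe status codes that were returned.

    # Quick triage of this excerpt (file name nvmf.log is assumed):
    # count the qpairs that could not be recovered, then summarize the
    # status codes reported by the failed CONNECT commands.
    grep -c 'qpair failed and we were unable to recover it' nvmf.log
    grep -o 'sct [0-9]*, sc [0-9]*' nvmf.log | sort | uniq -c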
00:26:27.481 [2024-04-27 02:46:00.847186] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.481 [2024-04-27 02:46:00.847301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.481 [2024-04-27 02:46:00.847315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.481 [2024-04-27 02:46:00.847322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.481 [2024-04-27 02:46:00.847327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.481 [2024-04-27 02:46:00.847339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.481 qpair failed and we were unable to recover it. 00:26:27.481 [2024-04-27 02:46:00.857183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.481 [2024-04-27 02:46:00.857275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.481 [2024-04-27 02:46:00.857291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.481 [2024-04-27 02:46:00.857297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.481 [2024-04-27 02:46:00.857305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.481 [2024-04-27 02:46:00.857317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.481 qpair failed and we were unable to recover it. 00:26:27.481 [2024-04-27 02:46:00.867120] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.481 [2024-04-27 02:46:00.867207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.481 [2024-04-27 02:46:00.867220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.481 [2024-04-27 02:46:00.867226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.481 [2024-04-27 02:46:00.867230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.481 [2024-04-27 02:46:00.867242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.481 qpair failed and we were unable to recover it. 
00:26:27.481 [2024-04-27 02:46:00.877225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.481 [2024-04-27 02:46:00.877318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.481 [2024-04-27 02:46:00.877331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.481 [2024-04-27 02:46:00.877337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.481 [2024-04-27 02:46:00.877342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.481 [2024-04-27 02:46:00.877353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.481 qpair failed and we were unable to recover it. 00:26:27.481 [2024-04-27 02:46:00.887290] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.481 [2024-04-27 02:46:00.887380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.481 [2024-04-27 02:46:00.887393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.481 [2024-04-27 02:46:00.887399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.481 [2024-04-27 02:46:00.887404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.481 [2024-04-27 02:46:00.887415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.481 qpair failed and we were unable to recover it. 00:26:27.481 [2024-04-27 02:46:00.897296] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.481 [2024-04-27 02:46:00.897398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.481 [2024-04-27 02:46:00.897412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.481 [2024-04-27 02:46:00.897418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.481 [2024-04-27 02:46:00.897423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.481 [2024-04-27 02:46:00.897435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.481 qpair failed and we were unable to recover it. 
00:26:27.481 [2024-04-27 02:46:00.907337] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:00.907428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:00.907441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:00.907447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.482 [2024-04-27 02:46:00.907452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.482 [2024-04-27 02:46:00.907463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.482 qpair failed and we were unable to recover it. 00:26:27.482 [2024-04-27 02:46:00.917373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:00.917467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:00.917480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:00.917486] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.482 [2024-04-27 02:46:00.917491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.482 [2024-04-27 02:46:00.917503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.482 qpair failed and we were unable to recover it. 00:26:27.482 [2024-04-27 02:46:00.927379] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:00.927465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:00.927478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:00.927484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.482 [2024-04-27 02:46:00.927489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.482 [2024-04-27 02:46:00.927502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.482 qpair failed and we were unable to recover it. 
00:26:27.482 [2024-04-27 02:46:00.937394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:00.937476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:00.937489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:00.937495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.482 [2024-04-27 02:46:00.937499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.482 [2024-04-27 02:46:00.937512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.482 qpair failed and we were unable to recover it. 00:26:27.482 [2024-04-27 02:46:00.947433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:00.947518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:00.947531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:00.947540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.482 [2024-04-27 02:46:00.947545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.482 [2024-04-27 02:46:00.947557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.482 qpair failed and we were unable to recover it. 00:26:27.482 [2024-04-27 02:46:00.957466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:00.957558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:00.957571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:00.957577] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.482 [2024-04-27 02:46:00.957582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.482 [2024-04-27 02:46:00.957593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.482 qpair failed and we were unable to recover it. 
00:26:27.482 [2024-04-27 02:46:00.967525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:00.967614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:00.967627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:00.967633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.482 [2024-04-27 02:46:00.967638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.482 [2024-04-27 02:46:00.967649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.482 qpair failed and we were unable to recover it. 00:26:27.482 [2024-04-27 02:46:00.977433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:00.977513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:00.977526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:00.977532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.482 [2024-04-27 02:46:00.977537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.482 [2024-04-27 02:46:00.977548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.482 qpair failed and we were unable to recover it. 00:26:27.482 [2024-04-27 02:46:00.987586] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:00.987670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:00.987683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:00.987689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.482 [2024-04-27 02:46:00.987693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.482 [2024-04-27 02:46:00.987705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.482 qpair failed and we were unable to recover it. 
00:26:27.482 [2024-04-27 02:46:00.997607] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:00.997698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:00.997711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:00.997717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.482 [2024-04-27 02:46:00.997722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.482 [2024-04-27 02:46:00.997733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.482 qpair failed and we were unable to recover it. 00:26:27.482 [2024-04-27 02:46:01.007522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:01.007612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:01.007626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:01.007631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.482 [2024-04-27 02:46:01.007636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.482 [2024-04-27 02:46:01.007647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.482 qpair failed and we were unable to recover it. 00:26:27.482 [2024-04-27 02:46:01.017617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:01.017702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:01.017715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:01.017721] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.482 [2024-04-27 02:46:01.017726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.482 [2024-04-27 02:46:01.017737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.482 qpair failed and we were unable to recover it. 
00:26:27.482 [2024-04-27 02:46:01.027547] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:01.027631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:01.027644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:01.027649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.482 [2024-04-27 02:46:01.027654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.482 [2024-04-27 02:46:01.027666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.482 qpair failed and we were unable to recover it. 00:26:27.482 [2024-04-27 02:46:01.037681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.482 [2024-04-27 02:46:01.037772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.482 [2024-04-27 02:46:01.037789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.482 [2024-04-27 02:46:01.037795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.483 [2024-04-27 02:46:01.037800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.483 [2024-04-27 02:46:01.037811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.483 qpair failed and we were unable to recover it. 00:26:27.483 [2024-04-27 02:46:01.047706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.483 [2024-04-27 02:46:01.047791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.483 [2024-04-27 02:46:01.047804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.483 [2024-04-27 02:46:01.047810] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.483 [2024-04-27 02:46:01.047814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.483 [2024-04-27 02:46:01.047826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.483 qpair failed and we were unable to recover it. 
00:26:27.483 [2024-04-27 02:46:01.057764] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.483 [2024-04-27 02:46:01.057856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.483 [2024-04-27 02:46:01.057869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.483 [2024-04-27 02:46:01.057875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.483 [2024-04-27 02:46:01.057880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.483 [2024-04-27 02:46:01.057894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.483 qpair failed and we were unable to recover it. 00:26:27.483 [2024-04-27 02:46:01.067782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.483 [2024-04-27 02:46:01.067873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.483 [2024-04-27 02:46:01.067894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.483 [2024-04-27 02:46:01.067901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.483 [2024-04-27 02:46:01.067906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.483 [2024-04-27 02:46:01.067921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.483 qpair failed and we were unable to recover it. 00:26:27.483 [2024-04-27 02:46:01.077784] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.483 [2024-04-27 02:46:01.077873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.483 [2024-04-27 02:46:01.077887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.483 [2024-04-27 02:46:01.077893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.483 [2024-04-27 02:46:01.077898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.483 [2024-04-27 02:46:01.077914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.483 qpair failed and we were unable to recover it. 
00:26:27.483 [2024-04-27 02:46:01.087847] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.483 [2024-04-27 02:46:01.087933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.483 [2024-04-27 02:46:01.087947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.483 [2024-04-27 02:46:01.087953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.483 [2024-04-27 02:46:01.087957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.483 [2024-04-27 02:46:01.087970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.483 qpair failed and we were unable to recover it. 00:26:27.483 [2024-04-27 02:46:01.097844] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.483 [2024-04-27 02:46:01.097927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.483 [2024-04-27 02:46:01.097941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.483 [2024-04-27 02:46:01.097946] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.483 [2024-04-27 02:46:01.097951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.483 [2024-04-27 02:46:01.097963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.483 qpair failed and we were unable to recover it. 00:26:27.745 [2024-04-27 02:46:01.107886] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.746 [2024-04-27 02:46:01.107972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.746 [2024-04-27 02:46:01.107985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.746 [2024-04-27 02:46:01.107992] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.746 [2024-04-27 02:46:01.107996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.746 [2024-04-27 02:46:01.108008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.746 qpair failed and we were unable to recover it. 
00:26:27.746 [2024-04-27 02:46:01.117945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.746 [2024-04-27 02:46:01.118033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.746 [2024-04-27 02:46:01.118045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.746 [2024-04-27 02:46:01.118051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.746 [2024-04-27 02:46:01.118056] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.746 [2024-04-27 02:46:01.118068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.746 qpair failed and we were unable to recover it. 00:26:27.746 [2024-04-27 02:46:01.127966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.746 [2024-04-27 02:46:01.128052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.746 [2024-04-27 02:46:01.128069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.746 [2024-04-27 02:46:01.128075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.746 [2024-04-27 02:46:01.128079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.746 [2024-04-27 02:46:01.128092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.746 qpair failed and we were unable to recover it. 00:26:27.746 [2024-04-27 02:46:01.137986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.746 [2024-04-27 02:46:01.138069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.746 [2024-04-27 02:46:01.138082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.746 [2024-04-27 02:46:01.138088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.746 [2024-04-27 02:46:01.138093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.746 [2024-04-27 02:46:01.138104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.746 qpair failed and we were unable to recover it. 
00:26:27.746 [2024-04-27 02:46:01.147988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.746 [2024-04-27 02:46:01.148075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.746 [2024-04-27 02:46:01.148088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.746 [2024-04-27 02:46:01.148094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.746 [2024-04-27 02:46:01.148098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.746 [2024-04-27 02:46:01.148110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.746 qpair failed and we were unable to recover it. 00:26:27.746 [2024-04-27 02:46:01.158017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.746 [2024-04-27 02:46:01.158112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.746 [2024-04-27 02:46:01.158125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.746 [2024-04-27 02:46:01.158131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.746 [2024-04-27 02:46:01.158136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.746 [2024-04-27 02:46:01.158147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.746 qpair failed and we were unable to recover it. 00:26:27.746 [2024-04-27 02:46:01.168103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.746 [2024-04-27 02:46:01.168227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.746 [2024-04-27 02:46:01.168241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.746 [2024-04-27 02:46:01.168246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.746 [2024-04-27 02:46:01.168251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.746 [2024-04-27 02:46:01.168266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.746 qpair failed and we were unable to recover it. 
00:26:27.746 [2024-04-27 02:46:01.178092] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.746 [2024-04-27 02:46:01.178176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.746 [2024-04-27 02:46:01.178190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.746 [2024-04-27 02:46:01.178197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.746 [2024-04-27 02:46:01.178201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.746 [2024-04-27 02:46:01.178212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.746 qpair failed and we were unable to recover it. 00:26:27.746 [2024-04-27 02:46:01.188062] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.746 [2024-04-27 02:46:01.188147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.746 [2024-04-27 02:46:01.188160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.746 [2024-04-27 02:46:01.188166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.746 [2024-04-27 02:46:01.188171] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.746 [2024-04-27 02:46:01.188182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.746 qpair failed and we were unable to recover it. 00:26:27.746 [2024-04-27 02:46:01.198147] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.746 [2024-04-27 02:46:01.198239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.746 [2024-04-27 02:46:01.198253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.746 [2024-04-27 02:46:01.198258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.746 [2024-04-27 02:46:01.198263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.746 [2024-04-27 02:46:01.198274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.746 qpair failed and we were unable to recover it. 
00:26:27.746 [2024-04-27 02:46:01.208136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.746 [2024-04-27 02:46:01.208217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.746 [2024-04-27 02:46:01.208231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.746 [2024-04-27 02:46:01.208236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.746 [2024-04-27 02:46:01.208241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.746 [2024-04-27 02:46:01.208253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.746 qpair failed and we were unable to recover it. 00:26:27.746 [2024-04-27 02:46:01.218157] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.746 [2024-04-27 02:46:01.218247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.746 [2024-04-27 02:46:01.218260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.746 [2024-04-27 02:46:01.218266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.746 [2024-04-27 02:46:01.218271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.746 [2024-04-27 02:46:01.218287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.746 qpair failed and we were unable to recover it. 00:26:27.746 [2024-04-27 02:46:01.228190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.746 [2024-04-27 02:46:01.228274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.746 [2024-04-27 02:46:01.228292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.746 [2024-04-27 02:46:01.228298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.746 [2024-04-27 02:46:01.228303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.746 [2024-04-27 02:46:01.228314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.746 qpair failed and we were unable to recover it. 
00:26:27.747 [2024-04-27 02:46:01.238301] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.747 [2024-04-27 02:46:01.238417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.747 [2024-04-27 02:46:01.238430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.747 [2024-04-27 02:46:01.238436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.747 [2024-04-27 02:46:01.238440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.747 [2024-04-27 02:46:01.238451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-04-27 02:46:01.248251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.747 [2024-04-27 02:46:01.248337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.747 [2024-04-27 02:46:01.248350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.747 [2024-04-27 02:46:01.248355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.747 [2024-04-27 02:46:01.248360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.747 [2024-04-27 02:46:01.248371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-04-27 02:46:01.258284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.747 [2024-04-27 02:46:01.258366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.747 [2024-04-27 02:46:01.258380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.747 [2024-04-27 02:46:01.258385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.747 [2024-04-27 02:46:01.258394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.747 [2024-04-27 02:46:01.258405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.747 qpair failed and we were unable to recover it. 
00:26:27.747 [2024-04-27 02:46:01.268380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.747 [2024-04-27 02:46:01.268466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.747 [2024-04-27 02:46:01.268479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.747 [2024-04-27 02:46:01.268485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.747 [2024-04-27 02:46:01.268489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.747 [2024-04-27 02:46:01.268501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-04-27 02:46:01.278384] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.747 [2024-04-27 02:46:01.278472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.747 [2024-04-27 02:46:01.278486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.747 [2024-04-27 02:46:01.278491] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.747 [2024-04-27 02:46:01.278496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.747 [2024-04-27 02:46:01.278508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-04-27 02:46:01.288383] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.747 [2024-04-27 02:46:01.288471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.747 [2024-04-27 02:46:01.288484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.747 [2024-04-27 02:46:01.288490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.747 [2024-04-27 02:46:01.288495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.747 [2024-04-27 02:46:01.288507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.747 qpair failed and we were unable to recover it. 
00:26:27.747 [2024-04-27 02:46:01.298318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.747 [2024-04-27 02:46:01.298423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.747 [2024-04-27 02:46:01.298438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.747 [2024-04-27 02:46:01.298443] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.747 [2024-04-27 02:46:01.298448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.747 [2024-04-27 02:46:01.298460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-04-27 02:46:01.308413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.747 [2024-04-27 02:46:01.308500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.747 [2024-04-27 02:46:01.308513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.747 [2024-04-27 02:46:01.308519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.747 [2024-04-27 02:46:01.308524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.747 [2024-04-27 02:46:01.308535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-04-27 02:46:01.318478] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.747 [2024-04-27 02:46:01.318588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.747 [2024-04-27 02:46:01.318601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.747 [2024-04-27 02:46:01.318606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.747 [2024-04-27 02:46:01.318611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.747 [2024-04-27 02:46:01.318622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.747 qpair failed and we were unable to recover it. 
00:26:27.747 [2024-04-27 02:46:01.328499] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.747 [2024-04-27 02:46:01.328589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.747 [2024-04-27 02:46:01.328603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.747 [2024-04-27 02:46:01.328608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.747 [2024-04-27 02:46:01.328612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.747 [2024-04-27 02:46:01.328624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-04-27 02:46:01.338512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.747 [2024-04-27 02:46:01.338596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.747 [2024-04-27 02:46:01.338609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.747 [2024-04-27 02:46:01.338615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.747 [2024-04-27 02:46:01.338619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.747 [2024-04-27 02:46:01.338630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.747 qpair failed and we were unable to recover it. 00:26:27.747 [2024-04-27 02:46:01.348442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.747 [2024-04-27 02:46:01.348529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.747 [2024-04-27 02:46:01.348542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.747 [2024-04-27 02:46:01.348551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.747 [2024-04-27 02:46:01.348555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.747 [2024-04-27 02:46:01.348567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.747 qpair failed and we were unable to recover it. 
00:26:27.747 [2024-04-27 02:46:01.358456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.747 [2024-04-27 02:46:01.358548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.747 [2024-04-27 02:46:01.358561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.747 [2024-04-27 02:46:01.358567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.747 [2024-04-27 02:46:01.358572] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:27.747 [2024-04-27 02:46:01.358583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:27.747 qpair failed and we were unable to recover it. 00:26:28.010 [2024-04-27 02:46:01.368613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.010 [2024-04-27 02:46:01.368696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.010 [2024-04-27 02:46:01.368709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.010 [2024-04-27 02:46:01.368715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.010 [2024-04-27 02:46:01.368719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.010 [2024-04-27 02:46:01.368732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-04-27 02:46:01.378613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.010 [2024-04-27 02:46:01.378694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.010 [2024-04-27 02:46:01.378707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.010 [2024-04-27 02:46:01.378713] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.010 [2024-04-27 02:46:01.378718] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.010 [2024-04-27 02:46:01.378729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.010 qpair failed and we were unable to recover it. 
00:26:28.010 [2024-04-27 02:46:01.388680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.010 [2024-04-27 02:46:01.388768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.010 [2024-04-27 02:46:01.388781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.010 [2024-04-27 02:46:01.388787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.010 [2024-04-27 02:46:01.388791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.010 [2024-04-27 02:46:01.388803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-04-27 02:46:01.398704] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.010 [2024-04-27 02:46:01.398794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.010 [2024-04-27 02:46:01.398807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.010 [2024-04-27 02:46:01.398813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.010 [2024-04-27 02:46:01.398818] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.398829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-04-27 02:46:01.408728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.011 [2024-04-27 02:46:01.408815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.011 [2024-04-27 02:46:01.408835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.011 [2024-04-27 02:46:01.408842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.011 [2024-04-27 02:46:01.408846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.408862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 
00:26:28.011 [2024-04-27 02:46:01.418716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.011 [2024-04-27 02:46:01.418800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.011 [2024-04-27 02:46:01.418814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.011 [2024-04-27 02:46:01.418820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.011 [2024-04-27 02:46:01.418825] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.418838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-04-27 02:46:01.428775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.011 [2024-04-27 02:46:01.428863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.011 [2024-04-27 02:46:01.428876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.011 [2024-04-27 02:46:01.428882] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.011 [2024-04-27 02:46:01.428886] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.428898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-04-27 02:46:01.438750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.011 [2024-04-27 02:46:01.438845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.011 [2024-04-27 02:46:01.438862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.011 [2024-04-27 02:46:01.438868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.011 [2024-04-27 02:46:01.438872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.438884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 
00:26:28.011 [2024-04-27 02:46:01.448694] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.011 [2024-04-27 02:46:01.448778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.011 [2024-04-27 02:46:01.448791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.011 [2024-04-27 02:46:01.448797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.011 [2024-04-27 02:46:01.448801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.448812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-04-27 02:46:01.458843] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.011 [2024-04-27 02:46:01.458935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.011 [2024-04-27 02:46:01.458949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.011 [2024-04-27 02:46:01.458955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.011 [2024-04-27 02:46:01.458960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.458973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-04-27 02:46:01.468875] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.011 [2024-04-27 02:46:01.468959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.011 [2024-04-27 02:46:01.468972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.011 [2024-04-27 02:46:01.468978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.011 [2024-04-27 02:46:01.468982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.468994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 
00:26:28.011 [2024-04-27 02:46:01.478922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.011 [2024-04-27 02:46:01.479011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.011 [2024-04-27 02:46:01.479024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.011 [2024-04-27 02:46:01.479029] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.011 [2024-04-27 02:46:01.479033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.479045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-04-27 02:46:01.488945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.011 [2024-04-27 02:46:01.489026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.011 [2024-04-27 02:46:01.489039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.011 [2024-04-27 02:46:01.489045] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.011 [2024-04-27 02:46:01.489049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.489061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-04-27 02:46:01.499000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.011 [2024-04-27 02:46:01.499089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.011 [2024-04-27 02:46:01.499102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.011 [2024-04-27 02:46:01.499108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.011 [2024-04-27 02:46:01.499112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.499124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 
00:26:28.011 [2024-04-27 02:46:01.509011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.011 [2024-04-27 02:46:01.509102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.011 [2024-04-27 02:46:01.509115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.011 [2024-04-27 02:46:01.509121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.011 [2024-04-27 02:46:01.509125] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.509137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-04-27 02:46:01.519086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.011 [2024-04-27 02:46:01.519201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.011 [2024-04-27 02:46:01.519214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.011 [2024-04-27 02:46:01.519220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.011 [2024-04-27 02:46:01.519224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.519236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-04-27 02:46:01.529026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.011 [2024-04-27 02:46:01.529110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.011 [2024-04-27 02:46:01.529126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.011 [2024-04-27 02:46:01.529132] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.011 [2024-04-27 02:46:01.529136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.011 [2024-04-27 02:46:01.529148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.011 qpair failed and we were unable to recover it. 
00:26:28.011 [2024-04-27 02:46:01.539078] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.012 [2024-04-27 02:46:01.539161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.012 [2024-04-27 02:46:01.539175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.012 [2024-04-27 02:46:01.539180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.012 [2024-04-27 02:46:01.539185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.012 [2024-04-27 02:46:01.539197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-04-27 02:46:01.549108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.012 [2024-04-27 02:46:01.549231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.012 [2024-04-27 02:46:01.549245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.012 [2024-04-27 02:46:01.549251] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.012 [2024-04-27 02:46:01.549255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.012 [2024-04-27 02:46:01.549267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-04-27 02:46:01.559006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.012 [2024-04-27 02:46:01.559100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.012 [2024-04-27 02:46:01.559114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.012 [2024-04-27 02:46:01.559119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.012 [2024-04-27 02:46:01.559124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.012 [2024-04-27 02:46:01.559135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.012 qpair failed and we were unable to recover it. 
00:26:28.012 [2024-04-27 02:46:01.569141] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.012 [2024-04-27 02:46:01.569225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.012 [2024-04-27 02:46:01.569238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.012 [2024-04-27 02:46:01.569243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.012 [2024-04-27 02:46:01.569248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.012 [2024-04-27 02:46:01.569262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-04-27 02:46:01.579165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.012 [2024-04-27 02:46:01.579246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.012 [2024-04-27 02:46:01.579259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.012 [2024-04-27 02:46:01.579265] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.012 [2024-04-27 02:46:01.579269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.012 [2024-04-27 02:46:01.579285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-04-27 02:46:01.589210] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.012 [2024-04-27 02:46:01.589307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.012 [2024-04-27 02:46:01.589321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.012 [2024-04-27 02:46:01.589327] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.012 [2024-04-27 02:46:01.589331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.012 [2024-04-27 02:46:01.589343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.012 qpair failed and we were unable to recover it. 
00:26:28.012 [2024-04-27 02:46:01.599233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.012 [2024-04-27 02:46:01.599344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.012 [2024-04-27 02:46:01.599358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.012 [2024-04-27 02:46:01.599364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.012 [2024-04-27 02:46:01.599368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.012 [2024-04-27 02:46:01.599380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-04-27 02:46:01.609290] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.012 [2024-04-27 02:46:01.609373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.012 [2024-04-27 02:46:01.609387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.012 [2024-04-27 02:46:01.609392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.012 [2024-04-27 02:46:01.609396] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.012 [2024-04-27 02:46:01.609408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-04-27 02:46:01.619317] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.012 [2024-04-27 02:46:01.619407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.012 [2024-04-27 02:46:01.619423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.012 [2024-04-27 02:46:01.619429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.012 [2024-04-27 02:46:01.619433] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.012 [2024-04-27 02:46:01.619445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.012 qpair failed and we were unable to recover it. 
00:26:28.276 [2024-04-27 02:46:01.629333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.276 [2024-04-27 02:46:01.629418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.629431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.629437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.277 [2024-04-27 02:46:01.629441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.277 [2024-04-27 02:46:01.629453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.277 qpair failed and we were unable to recover it. 00:26:28.277 [2024-04-27 02:46:01.639373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.277 [2024-04-27 02:46:01.639472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.639486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.639491] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.277 [2024-04-27 02:46:01.639496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.277 [2024-04-27 02:46:01.639507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.277 qpair failed and we were unable to recover it. 00:26:28.277 [2024-04-27 02:46:01.649453] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.277 [2024-04-27 02:46:01.649533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.649546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.649551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.277 [2024-04-27 02:46:01.649555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.277 [2024-04-27 02:46:01.649567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.277 qpair failed and we were unable to recover it. 
00:26:28.277 [2024-04-27 02:46:01.659435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.277 [2024-04-27 02:46:01.659518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.659531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.659537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.277 [2024-04-27 02:46:01.659544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.277 [2024-04-27 02:46:01.659556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.277 qpair failed and we were unable to recover it. 00:26:28.277 [2024-04-27 02:46:01.669432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.277 [2024-04-27 02:46:01.669518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.669531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.669537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.277 [2024-04-27 02:46:01.669541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.277 [2024-04-27 02:46:01.669553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.277 qpair failed and we were unable to recover it. 00:26:28.277 [2024-04-27 02:46:01.679498] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.277 [2024-04-27 02:46:01.679582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.679595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.679601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.277 [2024-04-27 02:46:01.679605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.277 [2024-04-27 02:46:01.679616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.277 qpair failed and we were unable to recover it. 
00:26:28.277 [2024-04-27 02:46:01.689500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.277 [2024-04-27 02:46:01.689587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.689600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.689606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.277 [2024-04-27 02:46:01.689611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.277 [2024-04-27 02:46:01.689622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.277 qpair failed and we were unable to recover it. 00:26:28.277 [2024-04-27 02:46:01.699478] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.277 [2024-04-27 02:46:01.699563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.699576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.699582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.277 [2024-04-27 02:46:01.699586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.277 [2024-04-27 02:46:01.699598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.277 qpair failed and we were unable to recover it. 00:26:28.277 [2024-04-27 02:46:01.709569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.277 [2024-04-27 02:46:01.709695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.709708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.709715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.277 [2024-04-27 02:46:01.709719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.277 [2024-04-27 02:46:01.709730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.277 qpair failed and we were unable to recover it. 
00:26:28.277 [2024-04-27 02:46:01.719436] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.277 [2024-04-27 02:46:01.719523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.719537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.719542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.277 [2024-04-27 02:46:01.719547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.277 [2024-04-27 02:46:01.719559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.277 qpair failed and we were unable to recover it. 00:26:28.277 [2024-04-27 02:46:01.729485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.277 [2024-04-27 02:46:01.729573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.729586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.729592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.277 [2024-04-27 02:46:01.729597] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.277 [2024-04-27 02:46:01.729609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.277 qpair failed and we were unable to recover it. 00:26:28.277 [2024-04-27 02:46:01.739637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.277 [2024-04-27 02:46:01.739718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.739731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.739737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.277 [2024-04-27 02:46:01.739741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.277 [2024-04-27 02:46:01.739752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.277 qpair failed and we were unable to recover it. 
00:26:28.277 [2024-04-27 02:46:01.749689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.277 [2024-04-27 02:46:01.749774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.749786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.749795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.277 [2024-04-27 02:46:01.749800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.277 [2024-04-27 02:46:01.749811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.277 qpair failed and we were unable to recover it. 00:26:28.277 [2024-04-27 02:46:01.759654] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.277 [2024-04-27 02:46:01.759745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.277 [2024-04-27 02:46:01.759758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.277 [2024-04-27 02:46:01.759763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.759768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.759779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 00:26:28.278 [2024-04-27 02:46:01.769709] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.278 [2024-04-27 02:46:01.769786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.278 [2024-04-27 02:46:01.769799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.278 [2024-04-27 02:46:01.769804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.769808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.769819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 
00:26:28.278 [2024-04-27 02:46:01.779678] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.278 [2024-04-27 02:46:01.779763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.278 [2024-04-27 02:46:01.779783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.278 [2024-04-27 02:46:01.779790] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.779795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.779810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 00:26:28.278 [2024-04-27 02:46:01.789746] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.278 [2024-04-27 02:46:01.789851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.278 [2024-04-27 02:46:01.789872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.278 [2024-04-27 02:46:01.789878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.789883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.789898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 00:26:28.278 [2024-04-27 02:46:01.799850] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.278 [2024-04-27 02:46:01.799951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.278 [2024-04-27 02:46:01.799972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.278 [2024-04-27 02:46:01.799979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.799983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.799999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 
00:26:28.278 [2024-04-27 02:46:01.809911] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.278 [2024-04-27 02:46:01.809999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.278 [2024-04-27 02:46:01.810019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.278 [2024-04-27 02:46:01.810025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.810030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.810045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 00:26:28.278 [2024-04-27 02:46:01.819840] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.278 [2024-04-27 02:46:01.819925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.278 [2024-04-27 02:46:01.819939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.278 [2024-04-27 02:46:01.819945] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.819950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.819962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 00:26:28.278 [2024-04-27 02:46:01.829901] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.278 [2024-04-27 02:46:01.829993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.278 [2024-04-27 02:46:01.830013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.278 [2024-04-27 02:46:01.830020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.830026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.830041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 
00:26:28.278 [2024-04-27 02:46:01.839918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.278 [2024-04-27 02:46:01.840029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.278 [2024-04-27 02:46:01.840043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.278 [2024-04-27 02:46:01.840053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.840058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.840071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 00:26:28.278 [2024-04-27 02:46:01.849926] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.278 [2024-04-27 02:46:01.850014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.278 [2024-04-27 02:46:01.850027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.278 [2024-04-27 02:46:01.850033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.850037] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.850049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 00:26:28.278 [2024-04-27 02:46:01.859957] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.278 [2024-04-27 02:46:01.860035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.278 [2024-04-27 02:46:01.860049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.278 [2024-04-27 02:46:01.860055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.860059] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.860071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 
00:26:28.278 [2024-04-27 02:46:01.870012] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.278 [2024-04-27 02:46:01.870129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.278 [2024-04-27 02:46:01.870149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.278 [2024-04-27 02:46:01.870156] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.870161] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.870176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 00:26:28.278 [2024-04-27 02:46:01.880003] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.278 [2024-04-27 02:46:01.880088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.278 [2024-04-27 02:46:01.880102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.278 [2024-04-27 02:46:01.880107] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.880112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.880124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 00:26:28.278 [2024-04-27 02:46:01.890225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.278 [2024-04-27 02:46:01.890330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.278 [2024-04-27 02:46:01.890344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.278 [2024-04-27 02:46:01.890350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.278 [2024-04-27 02:46:01.890354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.278 [2024-04-27 02:46:01.890367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.278 qpair failed and we were unable to recover it. 
00:26:28.541 [2024-04-27 02:46:01.900029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.541 [2024-04-27 02:46:01.900110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.541 [2024-04-27 02:46:01.900123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.541 [2024-04-27 02:46:01.900129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.541 [2024-04-27 02:46:01.900134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.541 [2024-04-27 02:46:01.900146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-04-27 02:46:01.910113] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.541 [2024-04-27 02:46:01.910196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.541 [2024-04-27 02:46:01.910209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.541 [2024-04-27 02:46:01.910215] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.541 [2024-04-27 02:46:01.910219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.541 [2024-04-27 02:46:01.910231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.541 qpair failed and we were unable to recover it. 00:26:28.541 [2024-04-27 02:46:01.920104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.541 [2024-04-27 02:46:01.920190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.541 [2024-04-27 02:46:01.920203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:01.920208] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:01.920213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:01.920224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-04-27 02:46:01.930088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.542 [2024-04-27 02:46:01.930174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.542 [2024-04-27 02:46:01.930190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:01.930196] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:01.930201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:01.930212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-04-27 02:46:01.940114] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.542 [2024-04-27 02:46:01.940192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.542 [2024-04-27 02:46:01.940206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:01.940211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:01.940215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:01.940227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-04-27 02:46:01.950221] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.542 [2024-04-27 02:46:01.950308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.542 [2024-04-27 02:46:01.950321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:01.950327] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:01.950331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:01.950343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-04-27 02:46:01.960063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.542 [2024-04-27 02:46:01.960147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.542 [2024-04-27 02:46:01.960160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:01.960165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:01.960170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:01.960182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-04-27 02:46:01.970246] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.542 [2024-04-27 02:46:01.970343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.542 [2024-04-27 02:46:01.970357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:01.970363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:01.970367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:01.970383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-04-27 02:46:01.980255] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.542 [2024-04-27 02:46:01.980340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.542 [2024-04-27 02:46:01.980353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:01.980359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:01.980364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:01.980376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-04-27 02:46:01.990322] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.542 [2024-04-27 02:46:01.990411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.542 [2024-04-27 02:46:01.990425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:01.990431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:01.990436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:01.990447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-04-27 02:46:02.000300] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.542 [2024-04-27 02:46:02.000389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.542 [2024-04-27 02:46:02.000403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:02.000409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:02.000414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:02.000425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-04-27 02:46:02.010371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.542 [2024-04-27 02:46:02.010462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.542 [2024-04-27 02:46:02.010475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:02.010481] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:02.010486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:02.010498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-04-27 02:46:02.020318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.542 [2024-04-27 02:46:02.020402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.542 [2024-04-27 02:46:02.020421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:02.020427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:02.020432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:02.020444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-04-27 02:46:02.030435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.542 [2024-04-27 02:46:02.030521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.542 [2024-04-27 02:46:02.030535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:02.030541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:02.030546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:02.030558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.542 qpair failed and we were unable to recover it. 00:26:28.542 [2024-04-27 02:46:02.040421] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.542 [2024-04-27 02:46:02.040506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.542 [2024-04-27 02:46:02.040519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:02.040525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:02.040529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:02.040541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.542 qpair failed and we were unable to recover it. 
00:26:28.542 [2024-04-27 02:46:02.050521] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.542 [2024-04-27 02:46:02.050626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.542 [2024-04-27 02:46:02.050639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.542 [2024-04-27 02:46:02.050645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.542 [2024-04-27 02:46:02.050650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.542 [2024-04-27 02:46:02.050661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-04-27 02:46:02.060459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.543 [2024-04-27 02:46:02.060544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.543 [2024-04-27 02:46:02.060557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.543 [2024-04-27 02:46:02.060563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.543 [2024-04-27 02:46:02.060570] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.543 [2024-04-27 02:46:02.060583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-04-27 02:46:02.070634] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.543 [2024-04-27 02:46:02.070718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.543 [2024-04-27 02:46:02.070732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.543 [2024-04-27 02:46:02.070738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.543 [2024-04-27 02:46:02.070743] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.543 [2024-04-27 02:46:02.070754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-04-27 02:46:02.080540] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.543 [2024-04-27 02:46:02.080623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.543 [2024-04-27 02:46:02.080636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.543 [2024-04-27 02:46:02.080642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.543 [2024-04-27 02:46:02.080647] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.543 [2024-04-27 02:46:02.080659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-04-27 02:46:02.090599] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.543 [2024-04-27 02:46:02.090683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.543 [2024-04-27 02:46:02.090696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.543 [2024-04-27 02:46:02.090702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.543 [2024-04-27 02:46:02.090707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.543 [2024-04-27 02:46:02.090718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-04-27 02:46:02.100579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.543 [2024-04-27 02:46:02.100660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.543 [2024-04-27 02:46:02.100673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.543 [2024-04-27 02:46:02.100679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.543 [2024-04-27 02:46:02.100684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.543 [2024-04-27 02:46:02.100695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-04-27 02:46:02.110633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.543 [2024-04-27 02:46:02.110724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.543 [2024-04-27 02:46:02.110737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.543 [2024-04-27 02:46:02.110743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.543 [2024-04-27 02:46:02.110748] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.543 [2024-04-27 02:46:02.110759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-04-27 02:46:02.120584] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.543 [2024-04-27 02:46:02.120666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.543 [2024-04-27 02:46:02.120679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.543 [2024-04-27 02:46:02.120685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.543 [2024-04-27 02:46:02.120689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.543 [2024-04-27 02:46:02.120701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-04-27 02:46:02.130681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.543 [2024-04-27 02:46:02.130761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.543 [2024-04-27 02:46:02.130775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.543 [2024-04-27 02:46:02.130780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.543 [2024-04-27 02:46:02.130785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.543 [2024-04-27 02:46:02.130797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.543 qpair failed and we were unable to recover it. 
00:26:28.543 [2024-04-27 02:46:02.140695] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.543 [2024-04-27 02:46:02.140775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.543 [2024-04-27 02:46:02.140788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.543 [2024-04-27 02:46:02.140794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.543 [2024-04-27 02:46:02.140799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.543 [2024-04-27 02:46:02.140810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.543 [2024-04-27 02:46:02.150758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.543 [2024-04-27 02:46:02.150846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.543 [2024-04-27 02:46:02.150859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.543 [2024-04-27 02:46:02.150865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.543 [2024-04-27 02:46:02.150873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.543 [2024-04-27 02:46:02.150885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.543 qpair failed and we were unable to recover it. 00:26:28.806 [2024-04-27 02:46:02.160724] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.806 [2024-04-27 02:46:02.160860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.806 [2024-04-27 02:46:02.160873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.806 [2024-04-27 02:46:02.160879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.806 [2024-04-27 02:46:02.160883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.806 [2024-04-27 02:46:02.160895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.806 qpair failed and we were unable to recover it. 
00:26:28.806 [2024-04-27 02:46:02.170624] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.806 [2024-04-27 02:46:02.170705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.806 [2024-04-27 02:46:02.170719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.806 [2024-04-27 02:46:02.170724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.806 [2024-04-27 02:46:02.170730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.806 [2024-04-27 02:46:02.170741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.806 qpair failed and we were unable to recover it. 00:26:28.806 [2024-04-27 02:46:02.180674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.806 [2024-04-27 02:46:02.180758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.806 [2024-04-27 02:46:02.180771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.806 [2024-04-27 02:46:02.180777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.806 [2024-04-27 02:46:02.180782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.806 [2024-04-27 02:46:02.180794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.806 qpair failed and we were unable to recover it. 00:26:28.806 [2024-04-27 02:46:02.190852] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.806 [2024-04-27 02:46:02.190937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.806 [2024-04-27 02:46:02.190950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.806 [2024-04-27 02:46:02.190955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.806 [2024-04-27 02:46:02.190961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.806 [2024-04-27 02:46:02.190972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.806 qpair failed and we were unable to recover it. 
00:26:28.806 [2024-04-27 02:46:02.200853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.806 [2024-04-27 02:46:02.200941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.806 [2024-04-27 02:46:02.200962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.806 [2024-04-27 02:46:02.200969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.806 [2024-04-27 02:46:02.200974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.806 [2024-04-27 02:46:02.200989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.806 qpair failed and we were unable to recover it. 00:26:28.806 [2024-04-27 02:46:02.210869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.806 [2024-04-27 02:46:02.210959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.806 [2024-04-27 02:46:02.210980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.806 [2024-04-27 02:46:02.210987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.806 [2024-04-27 02:46:02.210991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.806 [2024-04-27 02:46:02.211007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.806 qpair failed and we were unable to recover it. 00:26:28.806 [2024-04-27 02:46:02.220908] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.806 [2024-04-27 02:46:02.220995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.806 [2024-04-27 02:46:02.221016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.807 [2024-04-27 02:46:02.221022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.807 [2024-04-27 02:46:02.221027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.807 [2024-04-27 02:46:02.221043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.807 qpair failed and we were unable to recover it. 
00:26:28.807 [2024-04-27 02:46:02.230954] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.807 [2024-04-27 02:46:02.231046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.807 [2024-04-27 02:46:02.231067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.807 [2024-04-27 02:46:02.231074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.807 [2024-04-27 02:46:02.231078] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.807 [2024-04-27 02:46:02.231094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.807 qpair failed and we were unable to recover it. 00:26:28.807 [2024-04-27 02:46:02.240965] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.807 [2024-04-27 02:46:02.241057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.807 [2024-04-27 02:46:02.241077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.807 [2024-04-27 02:46:02.241097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.807 [2024-04-27 02:46:02.241103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.807 [2024-04-27 02:46:02.241118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.807 qpair failed and we were unable to recover it. 00:26:28.807 [2024-04-27 02:46:02.250990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.807 [2024-04-27 02:46:02.251081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.807 [2024-04-27 02:46:02.251101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.807 [2024-04-27 02:46:02.251108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.807 [2024-04-27 02:46:02.251113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.807 [2024-04-27 02:46:02.251128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.807 qpair failed and we were unable to recover it. 
00:26:28.807 [2024-04-27 02:46:02.261013] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.807 [2024-04-27 02:46:02.261096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.807 [2024-04-27 02:46:02.261117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.807 [2024-04-27 02:46:02.261123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.807 [2024-04-27 02:46:02.261128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.807 [2024-04-27 02:46:02.261144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.807 qpair failed and we were unable to recover it. 00:26:28.807 [2024-04-27 02:46:02.271078] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.807 [2024-04-27 02:46:02.271172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.807 [2024-04-27 02:46:02.271193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.807 [2024-04-27 02:46:02.271200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.807 [2024-04-27 02:46:02.271204] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.807 [2024-04-27 02:46:02.271221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.807 qpair failed and we were unable to recover it. 00:26:28.807 [2024-04-27 02:46:02.281021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.807 [2024-04-27 02:46:02.281104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.807 [2024-04-27 02:46:02.281118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.807 [2024-04-27 02:46:02.281124] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.807 [2024-04-27 02:46:02.281130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.807 [2024-04-27 02:46:02.281143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.807 qpair failed and we were unable to recover it. 
00:26:28.807 [2024-04-27 02:46:02.291072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.807 [2024-04-27 02:46:02.291155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.807 [2024-04-27 02:46:02.291169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.807 [2024-04-27 02:46:02.291176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.807 [2024-04-27 02:46:02.291181] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.807 [2024-04-27 02:46:02.291192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.807 qpair failed and we were unable to recover it. 00:26:28.807 [2024-04-27 02:46:02.300984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.807 [2024-04-27 02:46:02.301067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.807 [2024-04-27 02:46:02.301080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.807 [2024-04-27 02:46:02.301086] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.807 [2024-04-27 02:46:02.301091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.807 [2024-04-27 02:46:02.301103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.807 qpair failed and we were unable to recover it. 00:26:28.807 [2024-04-27 02:46:02.311183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.807 [2024-04-27 02:46:02.311269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.807 [2024-04-27 02:46:02.311287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.807 [2024-04-27 02:46:02.311294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.807 [2024-04-27 02:46:02.311298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.807 [2024-04-27 02:46:02.311310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.807 qpair failed and we were unable to recover it. 
00:26:28.807 [2024-04-27 02:46:02.321134] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.807 [2024-04-27 02:46:02.321215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.807 [2024-04-27 02:46:02.321228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.807 [2024-04-27 02:46:02.321233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.807 [2024-04-27 02:46:02.321238] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.807 [2024-04-27 02:46:02.321250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.807 qpair failed and we were unable to recover it. 00:26:28.807 [2024-04-27 02:46:02.331366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.807 [2024-04-27 02:46:02.331448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.807 [2024-04-27 02:46:02.331465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.807 [2024-04-27 02:46:02.331471] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.807 [2024-04-27 02:46:02.331475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.807 [2024-04-27 02:46:02.331488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.807 qpair failed and we were unable to recover it. 00:26:28.807 [2024-04-27 02:46:02.341199] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.807 [2024-04-27 02:46:02.341288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.807 [2024-04-27 02:46:02.341301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.807 [2024-04-27 02:46:02.341307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.807 [2024-04-27 02:46:02.341312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.807 [2024-04-27 02:46:02.341324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.807 qpair failed and we were unable to recover it. 
00:26:28.807 [2024-04-27 02:46:02.351154] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.808 [2024-04-27 02:46:02.351239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.808 [2024-04-27 02:46:02.351253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.808 [2024-04-27 02:46:02.351259] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.808 [2024-04-27 02:46:02.351263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.808 [2024-04-27 02:46:02.351283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.808 qpair failed and we were unable to recover it. 00:26:28.808 [2024-04-27 02:46:02.361271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.808 [2024-04-27 02:46:02.361356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.808 [2024-04-27 02:46:02.361369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.808 [2024-04-27 02:46:02.361375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.808 [2024-04-27 02:46:02.361379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.808 [2024-04-27 02:46:02.361391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.808 qpair failed and we were unable to recover it. 00:26:28.808 [2024-04-27 02:46:02.371261] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.808 [2024-04-27 02:46:02.371343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.808 [2024-04-27 02:46:02.371356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.808 [2024-04-27 02:46:02.371363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.808 [2024-04-27 02:46:02.371368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.808 [2024-04-27 02:46:02.371383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.808 qpair failed and we were unable to recover it. 
00:26:28.808 [2024-04-27 02:46:02.381332] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.808 [2024-04-27 02:46:02.381410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.808 [2024-04-27 02:46:02.381424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.808 [2024-04-27 02:46:02.381429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.808 [2024-04-27 02:46:02.381434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.808 [2024-04-27 02:46:02.381445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.808 qpair failed and we were unable to recover it. 00:26:28.808 [2024-04-27 02:46:02.391364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.808 [2024-04-27 02:46:02.391443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.808 [2024-04-27 02:46:02.391456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.808 [2024-04-27 02:46:02.391462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.808 [2024-04-27 02:46:02.391466] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.808 [2024-04-27 02:46:02.391478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.808 qpair failed and we were unable to recover it. 00:26:28.808 [2024-04-27 02:46:02.401373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.808 [2024-04-27 02:46:02.401455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.808 [2024-04-27 02:46:02.401468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.808 [2024-04-27 02:46:02.401474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.808 [2024-04-27 02:46:02.401479] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.808 [2024-04-27 02:46:02.401490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.808 qpair failed and we were unable to recover it. 
00:26:28.808 [2024-04-27 02:46:02.411420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.808 [2024-04-27 02:46:02.411498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.808 [2024-04-27 02:46:02.411512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.808 [2024-04-27 02:46:02.411518] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.808 [2024-04-27 02:46:02.411523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.808 [2024-04-27 02:46:02.411535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.808 qpair failed and we were unable to recover it. 00:26:28.808 [2024-04-27 02:46:02.421411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.808 [2024-04-27 02:46:02.421490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.808 [2024-04-27 02:46:02.421505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.808 [2024-04-27 02:46:02.421510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.808 [2024-04-27 02:46:02.421515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:28.808 [2024-04-27 02:46:02.421527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:28.808 qpair failed and we were unable to recover it. 00:26:29.071 [2024-04-27 02:46:02.431466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.071 [2024-04-27 02:46:02.431544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.071 [2024-04-27 02:46:02.431557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.071 [2024-04-27 02:46:02.431563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.071 [2024-04-27 02:46:02.431567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.071 [2024-04-27 02:46:02.431579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.071 qpair failed and we were unable to recover it. 
00:26:29.071 [2024-04-27 02:46:02.441483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.071 [2024-04-27 02:46:02.441570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.071 [2024-04-27 02:46:02.441583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.071 [2024-04-27 02:46:02.441589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.071 [2024-04-27 02:46:02.441594] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.071 [2024-04-27 02:46:02.441606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.071 qpair failed and we were unable to recover it. 00:26:29.071 [2024-04-27 02:46:02.451399] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.071 [2024-04-27 02:46:02.451476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.071 [2024-04-27 02:46:02.451489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.071 [2024-04-27 02:46:02.451495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.071 [2024-04-27 02:46:02.451499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.071 [2024-04-27 02:46:02.451510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.071 qpair failed and we were unable to recover it. 00:26:29.071 [2024-04-27 02:46:02.461541] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.071 [2024-04-27 02:46:02.461770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.071 [2024-04-27 02:46:02.461786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.071 [2024-04-27 02:46:02.461791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.071 [2024-04-27 02:46:02.461798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.071 [2024-04-27 02:46:02.461809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.071 qpair failed and we were unable to recover it. 
00:26:29.071 [2024-04-27 02:46:02.471580] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.071 [2024-04-27 02:46:02.471661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.071 [2024-04-27 02:46:02.471674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.071 [2024-04-27 02:46:02.471680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.071 [2024-04-27 02:46:02.471685] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.071 [2024-04-27 02:46:02.471696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.071 qpair failed and we were unable to recover it. 00:26:29.071 [2024-04-27 02:46:02.481602] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.071 [2024-04-27 02:46:02.481684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.071 [2024-04-27 02:46:02.481697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.071 [2024-04-27 02:46:02.481703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.071 [2024-04-27 02:46:02.481708] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.071 [2024-04-27 02:46:02.481719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.071 qpair failed and we were unable to recover it. 00:26:29.071 [2024-04-27 02:46:02.491605] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.491684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.491697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.491702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.491707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.072 [2024-04-27 02:46:02.491719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.072 qpair failed and we were unable to recover it. 
00:26:29.072 [2024-04-27 02:46:02.501663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.501739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.501752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.501758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.501763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.072 [2024-04-27 02:46:02.501775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.072 qpair failed and we were unable to recover it. 00:26:29.072 [2024-04-27 02:46:02.511674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.511756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.511769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.511775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.511780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.072 [2024-04-27 02:46:02.511792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.072 qpair failed and we were unable to recover it. 00:26:29.072 [2024-04-27 02:46:02.521687] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.521772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.521784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.521790] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.521794] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.072 [2024-04-27 02:46:02.521806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.072 qpair failed and we were unable to recover it. 
00:26:29.072 [2024-04-27 02:46:02.531747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.531845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.531866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.531872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.531877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.072 [2024-04-27 02:46:02.531892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.072 qpair failed and we were unable to recover it. 00:26:29.072 [2024-04-27 02:46:02.541723] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.541818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.541838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.541846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.541851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.072 [2024-04-27 02:46:02.541867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.072 qpair failed and we were unable to recover it. 00:26:29.072 [2024-04-27 02:46:02.551758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.551839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.551855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.551861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.551870] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.072 [2024-04-27 02:46:02.551883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.072 qpair failed and we were unable to recover it. 
00:26:29.072 [2024-04-27 02:46:02.561830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.561912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.561926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.561932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.561937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.072 [2024-04-27 02:46:02.561950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.072 qpair failed and we were unable to recover it. 00:26:29.072 [2024-04-27 02:46:02.571853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.571930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.571944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.571949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.571954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.072 [2024-04-27 02:46:02.571966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.072 qpair failed and we were unable to recover it. 00:26:29.072 [2024-04-27 02:46:02.581731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.581807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.581820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.581826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.581831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.072 [2024-04-27 02:46:02.581843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.072 qpair failed and we were unable to recover it. 
00:26:29.072 [2024-04-27 02:46:02.591903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.591982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.591995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.592001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.592005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.072 [2024-04-27 02:46:02.592016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.072 qpair failed and we were unable to recover it. 00:26:29.072 [2024-04-27 02:46:02.601837] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.601929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.601942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.601948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.601952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.072 [2024-04-27 02:46:02.601964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.072 qpair failed and we were unable to recover it. 00:26:29.072 [2024-04-27 02:46:02.611965] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.612040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.612053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.612059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.612063] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.072 [2024-04-27 02:46:02.612075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.072 qpair failed and we were unable to recover it. 
00:26:29.072 [2024-04-27 02:46:02.621967] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.072 [2024-04-27 02:46:02.622044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.072 [2024-04-27 02:46:02.622057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.072 [2024-04-27 02:46:02.622063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.072 [2024-04-27 02:46:02.622068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.073 [2024-04-27 02:46:02.622080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.073 qpair failed and we were unable to recover it. 00:26:29.073 [2024-04-27 02:46:02.632000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.073 [2024-04-27 02:46:02.632084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.073 [2024-04-27 02:46:02.632097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.073 [2024-04-27 02:46:02.632104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.073 [2024-04-27 02:46:02.632108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.073 [2024-04-27 02:46:02.632120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.073 qpair failed and we were unable to recover it. 00:26:29.073 [2024-04-27 02:46:02.642038] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.073 [2024-04-27 02:46:02.642119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.073 [2024-04-27 02:46:02.642132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.073 [2024-04-27 02:46:02.642141] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.073 [2024-04-27 02:46:02.642145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.073 [2024-04-27 02:46:02.642157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.073 qpair failed and we were unable to recover it. 
00:26:29.073 [2024-04-27 02:46:02.652059] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.073 [2024-04-27 02:46:02.652145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.073 [2024-04-27 02:46:02.652158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.073 [2024-04-27 02:46:02.652164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.073 [2024-04-27 02:46:02.652169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.073 [2024-04-27 02:46:02.652181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.073 qpair failed and we were unable to recover it. 00:26:29.073 [2024-04-27 02:46:02.662072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.073 [2024-04-27 02:46:02.662149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.073 [2024-04-27 02:46:02.662163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.073 [2024-04-27 02:46:02.662168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.073 [2024-04-27 02:46:02.662173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.073 [2024-04-27 02:46:02.662185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.073 qpair failed and we were unable to recover it. 00:26:29.073 [2024-04-27 02:46:02.671982] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.073 [2024-04-27 02:46:02.672060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.073 [2024-04-27 02:46:02.672073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.073 [2024-04-27 02:46:02.672079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.073 [2024-04-27 02:46:02.672084] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.073 [2024-04-27 02:46:02.672096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.073 qpair failed and we were unable to recover it. 
00:26:29.073 [2024-04-27 02:46:02.682131] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.073 [2024-04-27 02:46:02.682209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.073 [2024-04-27 02:46:02.682222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.073 [2024-04-27 02:46:02.682228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.073 [2024-04-27 02:46:02.682232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.073 [2024-04-27 02:46:02.682245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.073 qpair failed and we were unable to recover it. 00:26:29.336 [2024-04-27 02:46:02.692151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.336 [2024-04-27 02:46:02.692231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.336 [2024-04-27 02:46:02.692245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.336 [2024-04-27 02:46:02.692250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.336 [2024-04-27 02:46:02.692255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.336 [2024-04-27 02:46:02.692267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.336 qpair failed and we were unable to recover it. 00:26:29.336 [2024-04-27 02:46:02.702170] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.336 [2024-04-27 02:46:02.702255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.336 [2024-04-27 02:46:02.702269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.336 [2024-04-27 02:46:02.702275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.336 [2024-04-27 02:46:02.702288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.336 [2024-04-27 02:46:02.702301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.336 qpair failed and we were unable to recover it. 
00:26:29.336 [2024-04-27 02:46:02.712231] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.336 [2024-04-27 02:46:02.712313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.336 [2024-04-27 02:46:02.712327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.336 [2024-04-27 02:46:02.712332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.336 [2024-04-27 02:46:02.712337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.336 [2024-04-27 02:46:02.712349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.336 qpair failed and we were unable to recover it. 00:26:29.336 [2024-04-27 02:46:02.722259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.336 [2024-04-27 02:46:02.722353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.336 [2024-04-27 02:46:02.722367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.336 [2024-04-27 02:46:02.722373] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.336 [2024-04-27 02:46:02.722378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.336 [2024-04-27 02:46:02.722390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.336 qpair failed and we were unable to recover it. 00:26:29.336 [2024-04-27 02:46:02.732249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.336 [2024-04-27 02:46:02.732367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.336 [2024-04-27 02:46:02.732384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.336 [2024-04-27 02:46:02.732390] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.336 [2024-04-27 02:46:02.732394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.336 [2024-04-27 02:46:02.732406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.336 qpair failed and we were unable to recover it. 
00:26:29.336 [2024-04-27 02:46:02.742322] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.336 [2024-04-27 02:46:02.742406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.336 [2024-04-27 02:46:02.742419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.742424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.742429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.337 [2024-04-27 02:46:02.742440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.337 qpair failed and we were unable to recover it. 00:26:29.337 [2024-04-27 02:46:02.752352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.337 [2024-04-27 02:46:02.752435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.337 [2024-04-27 02:46:02.752448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.752454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.752458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.337 [2024-04-27 02:46:02.752470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.337 qpair failed and we were unable to recover it. 00:26:29.337 [2024-04-27 02:46:02.762325] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.337 [2024-04-27 02:46:02.762408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.337 [2024-04-27 02:46:02.762421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.762427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.762431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.337 [2024-04-27 02:46:02.762443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.337 qpair failed and we were unable to recover it. 
00:26:29.337 [2024-04-27 02:46:02.772367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.337 [2024-04-27 02:46:02.772444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.337 [2024-04-27 02:46:02.772458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.772464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.772468] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.337 [2024-04-27 02:46:02.772487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.337 qpair failed and we were unable to recover it. 00:26:29.337 [2024-04-27 02:46:02.782400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.337 [2024-04-27 02:46:02.782477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.337 [2024-04-27 02:46:02.782490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.782496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.782501] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.337 [2024-04-27 02:46:02.782513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.337 qpair failed and we were unable to recover it. 00:26:29.337 [2024-04-27 02:46:02.792414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.337 [2024-04-27 02:46:02.792493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.337 [2024-04-27 02:46:02.792507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.792512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.792517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.337 [2024-04-27 02:46:02.792529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.337 qpair failed and we were unable to recover it. 
00:26:29.337 [2024-04-27 02:46:02.802450] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.337 [2024-04-27 02:46:02.802532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.337 [2024-04-27 02:46:02.802545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.802551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.802556] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.337 [2024-04-27 02:46:02.802567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.337 qpair failed and we were unable to recover it. 00:26:29.337 [2024-04-27 02:46:02.812522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.337 [2024-04-27 02:46:02.812599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.337 [2024-04-27 02:46:02.812612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.812618] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.812623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.337 [2024-04-27 02:46:02.812635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.337 qpair failed and we were unable to recover it. 00:26:29.337 [2024-04-27 02:46:02.822555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.337 [2024-04-27 02:46:02.822643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.337 [2024-04-27 02:46:02.822659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.822665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.822669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.337 [2024-04-27 02:46:02.822682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.337 qpair failed and we were unable to recover it. 
00:26:29.337 [2024-04-27 02:46:02.832525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.337 [2024-04-27 02:46:02.832604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.337 [2024-04-27 02:46:02.832617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.832623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.832628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.337 [2024-04-27 02:46:02.832640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.337 qpair failed and we were unable to recover it. 00:26:29.337 [2024-04-27 02:46:02.842584] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.337 [2024-04-27 02:46:02.842672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.337 [2024-04-27 02:46:02.842684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.842689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.842695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.337 [2024-04-27 02:46:02.842706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.337 qpair failed and we were unable to recover it. 00:26:29.337 [2024-04-27 02:46:02.852614] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.337 [2024-04-27 02:46:02.852696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.337 [2024-04-27 02:46:02.852710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.852715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.852720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.337 [2024-04-27 02:46:02.852732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.337 qpair failed and we were unable to recover it. 
00:26:29.337 [2024-04-27 02:46:02.862505] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.337 [2024-04-27 02:46:02.862586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.337 [2024-04-27 02:46:02.862599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.862605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.862609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.337 [2024-04-27 02:46:02.862624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.337 qpair failed and we were unable to recover it. 00:26:29.337 [2024-04-27 02:46:02.872522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.337 [2024-04-27 02:46:02.872604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.337 [2024-04-27 02:46:02.872618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.337 [2024-04-27 02:46:02.872623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.337 [2024-04-27 02:46:02.872628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.338 [2024-04-27 02:46:02.872640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.338 qpair failed and we were unable to recover it. 00:26:29.338 [2024-04-27 02:46:02.882695] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.338 [2024-04-27 02:46:02.882778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.338 [2024-04-27 02:46:02.882792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.338 [2024-04-27 02:46:02.882797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.338 [2024-04-27 02:46:02.882802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.338 [2024-04-27 02:46:02.882814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.338 qpair failed and we were unable to recover it. 
00:26:29.338 [2024-04-27 02:46:02.892720] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.338 [2024-04-27 02:46:02.892806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.338 [2024-04-27 02:46:02.892820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.338 [2024-04-27 02:46:02.892826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.338 [2024-04-27 02:46:02.892830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.338 [2024-04-27 02:46:02.892842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.338 qpair failed and we were unable to recover it. 00:26:29.338 [2024-04-27 02:46:02.902781] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.338 [2024-04-27 02:46:02.902902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.338 [2024-04-27 02:46:02.902916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.338 [2024-04-27 02:46:02.902921] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.338 [2024-04-27 02:46:02.902926] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.338 [2024-04-27 02:46:02.902938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.338 qpair failed and we were unable to recover it. 00:26:29.338 [2024-04-27 02:46:02.912721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.338 [2024-04-27 02:46:02.912829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.338 [2024-04-27 02:46:02.912843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.338 [2024-04-27 02:46:02.912849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.338 [2024-04-27 02:46:02.912853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.338 [2024-04-27 02:46:02.912865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.338 qpair failed and we were unable to recover it. 
00:26:29.338 [2024-04-27 02:46:02.922813] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.338 [2024-04-27 02:46:02.922897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.338 [2024-04-27 02:46:02.922910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.338 [2024-04-27 02:46:02.922916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.338 [2024-04-27 02:46:02.922921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.338 [2024-04-27 02:46:02.922932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.338 qpair failed and we were unable to recover it. 00:26:29.338 [2024-04-27 02:46:02.932769] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.338 [2024-04-27 02:46:02.932843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.338 [2024-04-27 02:46:02.932856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.338 [2024-04-27 02:46:02.932862] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.338 [2024-04-27 02:46:02.932867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.338 [2024-04-27 02:46:02.932879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.338 qpair failed and we were unable to recover it. 00:26:29.338 [2024-04-27 02:46:02.942826] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.338 [2024-04-27 02:46:02.942936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.338 [2024-04-27 02:46:02.942950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.338 [2024-04-27 02:46:02.942955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.338 [2024-04-27 02:46:02.942959] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.338 [2024-04-27 02:46:02.942971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.338 qpair failed and we were unable to recover it. 
00:26:29.338 [2024-04-27 02:46:02.952878] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.338 [2024-04-27 02:46:02.952961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.338 [2024-04-27 02:46:02.952974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.338 [2024-04-27 02:46:02.952980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.338 [2024-04-27 02:46:02.952988] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.338 [2024-04-27 02:46:02.952999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.338 qpair failed and we were unable to recover it. 00:26:29.599 [2024-04-27 02:46:02.962946] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.599 [2024-04-27 02:46:02.963029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.599 [2024-04-27 02:46:02.963042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.599 [2024-04-27 02:46:02.963047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.599 [2024-04-27 02:46:02.963052] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.599 [2024-04-27 02:46:02.963064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.599 qpair failed and we were unable to recover it. 00:26:29.599 [2024-04-27 02:46:02.972903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.599 [2024-04-27 02:46:02.972990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.599 [2024-04-27 02:46:02.973010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.599 [2024-04-27 02:46:02.973017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.599 [2024-04-27 02:46:02.973022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.599 [2024-04-27 02:46:02.973037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.599 qpair failed and we were unable to recover it. 
00:26:29.599 [2024-04-27 02:46:02.982958] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.599 [2024-04-27 02:46:02.983037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.599 [2024-04-27 02:46:02.983057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.599 [2024-04-27 02:46:02.983064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.599 [2024-04-27 02:46:02.983068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.599 [2024-04-27 02:46:02.983084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.599 qpair failed and we were unable to recover it. 00:26:29.599 [2024-04-27 02:46:02.992950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.599 [2024-04-27 02:46:02.993053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.599 [2024-04-27 02:46:02.993068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.599 [2024-04-27 02:46:02.993075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.599 [2024-04-27 02:46:02.993079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.599 [2024-04-27 02:46:02.993092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.599 qpair failed and we were unable to recover it. 00:26:29.599 [2024-04-27 02:46:03.002992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.599 [2024-04-27 02:46:03.003073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.599 [2024-04-27 02:46:03.003086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.599 [2024-04-27 02:46:03.003092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.599 [2024-04-27 02:46:03.003097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.599 [2024-04-27 02:46:03.003109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.599 qpair failed and we were unable to recover it. 
00:26:29.599 [2024-04-27 02:46:03.013045] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.599 [2024-04-27 02:46:03.013122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.599 [2024-04-27 02:46:03.013136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.599 [2024-04-27 02:46:03.013141] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.599 [2024-04-27 02:46:03.013146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.599 [2024-04-27 02:46:03.013158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.599 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.023026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.023104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.023117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.023123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.023128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.023139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.033000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.033075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.033088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.033093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.033098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.033110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 
00:26:29.600 [2024-04-27 02:46:03.043083] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.043168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.043181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.043190] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.043195] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.043207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.053122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.053198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.053210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.053216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.053221] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.053233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.063180] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.063258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.063271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.063282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.063287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.063299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 
00:26:29.600 [2024-04-27 02:46:03.073203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.073285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.073299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.073304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.073309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.073322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.083220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.083302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.083315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.083321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.083326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.083338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.093220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.093299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.093313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.093318] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.093323] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.093335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 
00:26:29.600 [2024-04-27 02:46:03.103246] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.103329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.103343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.103349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.103354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.103366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.113339] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.113458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.113472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.113478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.113482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.113494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.123348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.123432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.123445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.123451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.123456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.123468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 
00:26:29.600 [2024-04-27 02:46:03.133376] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.133449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.133465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.133471] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.133476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.133488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.143371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.143450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.143463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.143469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.143474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.143486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.153426] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.153504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.153517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.153523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.153528] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.153540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 
00:26:29.600 [2024-04-27 02:46:03.163442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.163527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.163540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.163546] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.163551] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.163563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.173351] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.173430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.173443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.173449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.173454] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.173465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.183482] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.183562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.183575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.183581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.183586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.183597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 
00:26:29.600 [2024-04-27 02:46:03.193459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.193537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.193550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.193556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.193561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.193573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.203553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.203650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.203664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.203669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.203674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.203686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 00:26:29.600 [2024-04-27 02:46:03.213552] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.600 [2024-04-27 02:46:03.213637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.600 [2024-04-27 02:46:03.213650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.600 [2024-04-27 02:46:03.213656] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.600 [2024-04-27 02:46:03.213661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.600 [2024-04-27 02:46:03.213672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.600 qpair failed and we were unable to recover it. 
00:26:29.861 [2024-04-27 02:46:03.223585] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.861 [2024-04-27 02:46:03.223666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.861 [2024-04-27 02:46:03.223682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.861 [2024-04-27 02:46:03.223687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.861 [2024-04-27 02:46:03.223692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.861 [2024-04-27 02:46:03.223704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.861 qpair failed and we were unable to recover it. 00:26:29.862 [2024-04-27 02:46:03.233520] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.233598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.862 [2024-04-27 02:46:03.233611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.862 [2024-04-27 02:46:03.233616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.862 [2024-04-27 02:46:03.233621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.862 [2024-04-27 02:46:03.233632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.862 qpair failed and we were unable to recover it. 00:26:29.862 [2024-04-27 02:46:03.243637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.243721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.862 [2024-04-27 02:46:03.243731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.862 [2024-04-27 02:46:03.243736] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.862 [2024-04-27 02:46:03.243741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.862 [2024-04-27 02:46:03.243752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.862 qpair failed and we were unable to recover it. 
00:26:29.862 [2024-04-27 02:46:03.253697] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.253778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.862 [2024-04-27 02:46:03.253791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.862 [2024-04-27 02:46:03.253798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.862 [2024-04-27 02:46:03.253802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.862 [2024-04-27 02:46:03.253814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.862 qpair failed and we were unable to recover it. 00:26:29.862 [2024-04-27 02:46:03.263718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.263859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.862 [2024-04-27 02:46:03.263872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.862 [2024-04-27 02:46:03.263878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.862 [2024-04-27 02:46:03.263882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.862 [2024-04-27 02:46:03.263896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.862 qpair failed and we were unable to recover it. 00:26:29.862 [2024-04-27 02:46:03.273735] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.273817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.862 [2024-04-27 02:46:03.273837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.862 [2024-04-27 02:46:03.273844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.862 [2024-04-27 02:46:03.273848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.862 [2024-04-27 02:46:03.273863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.862 qpair failed and we were unable to recover it. 
00:26:29.862 [2024-04-27 02:46:03.283788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.283875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.862 [2024-04-27 02:46:03.283890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.862 [2024-04-27 02:46:03.283896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.862 [2024-04-27 02:46:03.283901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.862 [2024-04-27 02:46:03.283913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.862 qpair failed and we were unable to recover it. 00:26:29.862 [2024-04-27 02:46:03.293803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.293884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.862 [2024-04-27 02:46:03.293898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.862 [2024-04-27 02:46:03.293903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.862 [2024-04-27 02:46:03.293908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.862 [2024-04-27 02:46:03.293920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.862 qpair failed and we were unable to recover it. 00:26:29.862 [2024-04-27 02:46:03.303810] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.303892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.862 [2024-04-27 02:46:03.303906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.862 [2024-04-27 02:46:03.303911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.862 [2024-04-27 02:46:03.303916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.862 [2024-04-27 02:46:03.303928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.862 qpair failed and we were unable to recover it. 
00:26:29.862 [2024-04-27 02:46:03.313817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.313895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.862 [2024-04-27 02:46:03.313912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.862 [2024-04-27 02:46:03.313918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.862 [2024-04-27 02:46:03.313922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.862 [2024-04-27 02:46:03.313934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.862 qpair failed and we were unable to recover it. 00:26:29.862 [2024-04-27 02:46:03.323848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.323931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.862 [2024-04-27 02:46:03.323944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.862 [2024-04-27 02:46:03.323949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.862 [2024-04-27 02:46:03.323954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.862 [2024-04-27 02:46:03.323966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.862 qpair failed and we were unable to recover it. 00:26:29.862 [2024-04-27 02:46:03.333869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.333943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.862 [2024-04-27 02:46:03.333956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.862 [2024-04-27 02:46:03.333962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.862 [2024-04-27 02:46:03.333966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.862 [2024-04-27 02:46:03.333978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.862 qpair failed and we were unable to recover it. 
00:26:29.862 [2024-04-27 02:46:03.343890] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.344009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.862 [2024-04-27 02:46:03.344023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.862 [2024-04-27 02:46:03.344029] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.862 [2024-04-27 02:46:03.344033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.862 [2024-04-27 02:46:03.344045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.862 qpair failed and we were unable to recover it. 00:26:29.862 [2024-04-27 02:46:03.353947] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.354025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.862 [2024-04-27 02:46:03.354038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.862 [2024-04-27 02:46:03.354043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.862 [2024-04-27 02:46:03.354051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.862 [2024-04-27 02:46:03.354063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.862 qpair failed and we were unable to recover it. 00:26:29.862 [2024-04-27 02:46:03.363988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.862 [2024-04-27 02:46:03.364085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.863 [2024-04-27 02:46:03.364105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.863 [2024-04-27 02:46:03.364112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.863 [2024-04-27 02:46:03.364117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.863 [2024-04-27 02:46:03.364132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.863 qpair failed and we were unable to recover it. 
00:26:29.863 [2024-04-27 02:46:03.374000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.863 [2024-04-27 02:46:03.374077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.863 [2024-04-27 02:46:03.374091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.863 [2024-04-27 02:46:03.374097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.863 [2024-04-27 02:46:03.374102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.863 [2024-04-27 02:46:03.374115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.863 qpair failed and we were unable to recover it. 00:26:29.863 [2024-04-27 02:46:03.384063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.863 [2024-04-27 02:46:03.384141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.863 [2024-04-27 02:46:03.384161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.863 [2024-04-27 02:46:03.384168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.863 [2024-04-27 02:46:03.384174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.863 [2024-04-27 02:46:03.384189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.863 qpair failed and we were unable to recover it. 00:26:29.863 [2024-04-27 02:46:03.394070] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.863 [2024-04-27 02:46:03.394149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.863 [2024-04-27 02:46:03.394163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.863 [2024-04-27 02:46:03.394169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.863 [2024-04-27 02:46:03.394174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.863 [2024-04-27 02:46:03.394186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.863 qpair failed and we were unable to recover it. 
00:26:29.863 [2024-04-27 02:46:03.404095] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.863 [2024-04-27 02:46:03.404179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.863 [2024-04-27 02:46:03.404193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.863 [2024-04-27 02:46:03.404198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.863 [2024-04-27 02:46:03.404203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.863 [2024-04-27 02:46:03.404215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.863 qpair failed and we were unable to recover it. 00:26:29.863 [2024-04-27 02:46:03.414105] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.863 [2024-04-27 02:46:03.414178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.863 [2024-04-27 02:46:03.414192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.863 [2024-04-27 02:46:03.414197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.863 [2024-04-27 02:46:03.414202] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.863 [2024-04-27 02:46:03.414214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.863 qpair failed and we were unable to recover it. 00:26:29.863 [2024-04-27 02:46:03.424150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.863 [2024-04-27 02:46:03.424229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.863 [2024-04-27 02:46:03.424243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.863 [2024-04-27 02:46:03.424249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.863 [2024-04-27 02:46:03.424254] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.863 [2024-04-27 02:46:03.424265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.863 qpair failed and we were unable to recover it. 
00:26:29.863 [2024-04-27 02:46:03.434178] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.863 [2024-04-27 02:46:03.434259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.863 [2024-04-27 02:46:03.434272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.863 [2024-04-27 02:46:03.434283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.863 [2024-04-27 02:46:03.434287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.863 [2024-04-27 02:46:03.434300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.863 qpair failed and we were unable to recover it. 00:26:29.863 [2024-04-27 02:46:03.444249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.863 [2024-04-27 02:46:03.444371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.863 [2024-04-27 02:46:03.444385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.863 [2024-04-27 02:46:03.444395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.863 [2024-04-27 02:46:03.444399] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.863 [2024-04-27 02:46:03.444411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.863 qpair failed and we were unable to recover it. 00:26:29.863 [2024-04-27 02:46:03.454214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.863 [2024-04-27 02:46:03.454297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.863 [2024-04-27 02:46:03.454310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.863 [2024-04-27 02:46:03.454316] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.863 [2024-04-27 02:46:03.454321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.863 [2024-04-27 02:46:03.454332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.863 qpair failed and we were unable to recover it. 
00:26:29.863 [2024-04-27 02:46:03.464125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.863 [2024-04-27 02:46:03.464200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.863 [2024-04-27 02:46:03.464214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.863 [2024-04-27 02:46:03.464219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.863 [2024-04-27 02:46:03.464224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.863 [2024-04-27 02:46:03.464235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.863 qpair failed and we were unable to recover it. 00:26:29.863 [2024-04-27 02:46:03.474292] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.863 [2024-04-27 02:46:03.474521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.863 [2024-04-27 02:46:03.474534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.863 [2024-04-27 02:46:03.474539] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.863 [2024-04-27 02:46:03.474544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:29.863 [2024-04-27 02:46:03.474555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.863 qpair failed and we were unable to recover it. 00:26:30.125 [2024-04-27 02:46:03.484318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.125 [2024-04-27 02:46:03.484411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.126 [2024-04-27 02:46:03.484424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.126 [2024-04-27 02:46:03.484430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.126 [2024-04-27 02:46:03.484435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.126 [2024-04-27 02:46:03.484446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.126 qpair failed and we were unable to recover it. 
00:26:30.126 [2024-04-27 02:46:03.494332] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.126 [2024-04-27 02:46:03.494547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.126 [2024-04-27 02:46:03.494561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.126 [2024-04-27 02:46:03.494566] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.126 [2024-04-27 02:46:03.494571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.126 [2024-04-27 02:46:03.494583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-04-27 02:46:03.504361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.126 [2024-04-27 02:46:03.504443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.126 [2024-04-27 02:46:03.504457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.126 [2024-04-27 02:46:03.504463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.126 [2024-04-27 02:46:03.504467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.126 [2024-04-27 02:46:03.504479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-04-27 02:46:03.514395] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.126 [2024-04-27 02:46:03.514473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.126 [2024-04-27 02:46:03.514486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.126 [2024-04-27 02:46:03.514492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.126 [2024-04-27 02:46:03.514497] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.126 [2024-04-27 02:46:03.514508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.126 qpair failed and we were unable to recover it. 
00:26:30.126 [2024-04-27 02:46:03.524383] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.126 [2024-04-27 02:46:03.524465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.126 [2024-04-27 02:46:03.524478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.126 [2024-04-27 02:46:03.524484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.126 [2024-04-27 02:46:03.524489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.126 [2024-04-27 02:46:03.524501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-04-27 02:46:03.534476] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.126 [2024-04-27 02:46:03.534552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.126 [2024-04-27 02:46:03.534566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.126 [2024-04-27 02:46:03.534574] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.126 [2024-04-27 02:46:03.534580] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.126 [2024-04-27 02:46:03.534591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-04-27 02:46:03.544488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.126 [2024-04-27 02:46:03.544567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.126 [2024-04-27 02:46:03.544577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.126 [2024-04-27 02:46:03.544582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.126 [2024-04-27 02:46:03.544588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.126 [2024-04-27 02:46:03.544598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.126 qpair failed and we were unable to recover it. 
00:26:30.126 [2024-04-27 02:46:03.554514] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.126 [2024-04-27 02:46:03.554599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.126 [2024-04-27 02:46:03.554612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.126 [2024-04-27 02:46:03.554617] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.126 [2024-04-27 02:46:03.554622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.126 [2024-04-27 02:46:03.554634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-04-27 02:46:03.564521] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.126 [2024-04-27 02:46:03.564604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.126 [2024-04-27 02:46:03.564617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.126 [2024-04-27 02:46:03.564622] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.126 [2024-04-27 02:46:03.564627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.126 [2024-04-27 02:46:03.564638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-04-27 02:46:03.574480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.126 [2024-04-27 02:46:03.574558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.126 [2024-04-27 02:46:03.574571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.126 [2024-04-27 02:46:03.574576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.126 [2024-04-27 02:46:03.574581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.126 [2024-04-27 02:46:03.574593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.126 qpair failed and we were unable to recover it. 
00:26:30.126 [2024-04-27 02:46:03.584586] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.126 [2024-04-27 02:46:03.584669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.126 [2024-04-27 02:46:03.584682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.126 [2024-04-27 02:46:03.584688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.126 [2024-04-27 02:46:03.584693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.126 [2024-04-27 02:46:03.584703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-04-27 02:46:03.594525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.126 [2024-04-27 02:46:03.594604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.126 [2024-04-27 02:46:03.594617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.126 [2024-04-27 02:46:03.594623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.126 [2024-04-27 02:46:03.594627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.126 [2024-04-27 02:46:03.594638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.126 qpair failed and we were unable to recover it. 00:26:30.126 [2024-04-27 02:46:03.604597] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.127 [2024-04-27 02:46:03.604682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.127 [2024-04-27 02:46:03.604695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.127 [2024-04-27 02:46:03.604701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.127 [2024-04-27 02:46:03.604706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.127 [2024-04-27 02:46:03.604717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.127 qpair failed and we were unable to recover it. 
00:26:30.127 [2024-04-27 02:46:03.614512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.127 [2024-04-27 02:46:03.614591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.127 [2024-04-27 02:46:03.614604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.127 [2024-04-27 02:46:03.614610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.127 [2024-04-27 02:46:03.614615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.127 [2024-04-27 02:46:03.614627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-04-27 02:46:03.624672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.127 [2024-04-27 02:46:03.624749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.127 [2024-04-27 02:46:03.624764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.127 [2024-04-27 02:46:03.624770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.127 [2024-04-27 02:46:03.624775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.127 [2024-04-27 02:46:03.624787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-04-27 02:46:03.634624] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.127 [2024-04-27 02:46:03.634702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.127 [2024-04-27 02:46:03.634715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.127 [2024-04-27 02:46:03.634720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.127 [2024-04-27 02:46:03.634725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.127 [2024-04-27 02:46:03.634736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.127 qpair failed and we were unable to recover it. 
00:26:30.127 [2024-04-27 02:46:03.644742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.127 [2024-04-27 02:46:03.644824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.127 [2024-04-27 02:46:03.644836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.127 [2024-04-27 02:46:03.644842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.127 [2024-04-27 02:46:03.644847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.127 [2024-04-27 02:46:03.644858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-04-27 02:46:03.654751] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.127 [2024-04-27 02:46:03.654831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.127 [2024-04-27 02:46:03.654844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.127 [2024-04-27 02:46:03.654850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.127 [2024-04-27 02:46:03.654854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.127 [2024-04-27 02:46:03.654865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-04-27 02:46:03.664676] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.127 [2024-04-27 02:46:03.664758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.127 [2024-04-27 02:46:03.664772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.127 [2024-04-27 02:46:03.664777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.127 [2024-04-27 02:46:03.664782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.127 [2024-04-27 02:46:03.664797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.127 qpair failed and we were unable to recover it. 
00:26:30.127 [2024-04-27 02:46:03.674808] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.127 [2024-04-27 02:46:03.674888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.127 [2024-04-27 02:46:03.674901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.127 [2024-04-27 02:46:03.674907] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.127 [2024-04-27 02:46:03.674911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.127 [2024-04-27 02:46:03.674923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-04-27 02:46:03.684848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.127 [2024-04-27 02:46:03.684933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.127 [2024-04-27 02:46:03.684946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.127 [2024-04-27 02:46:03.684951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.127 [2024-04-27 02:46:03.684956] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.127 [2024-04-27 02:46:03.684967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.127 [2024-04-27 02:46:03.694876] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.127 [2024-04-27 02:46:03.695001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.127 [2024-04-27 02:46:03.695022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.127 [2024-04-27 02:46:03.695028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.127 [2024-04-27 02:46:03.695033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.127 [2024-04-27 02:46:03.695049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.127 qpair failed and we were unable to recover it. 
00:26:30.127 [2024-04-27 02:46:03.704865] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.127 [2024-04-27 02:46:03.704946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.127 [2024-04-27 02:46:03.704965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.127 [2024-04-27 02:46:03.704972] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.127 [2024-04-27 02:46:03.704977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.127 [2024-04-27 02:46:03.704992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.127 qpair failed and we were unable to recover it. 00:26:30.128 [2024-04-27 02:46:03.714897] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.128 [2024-04-27 02:46:03.714982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.128 [2024-04-27 02:46:03.715005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.128 [2024-04-27 02:46:03.715012] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.128 [2024-04-27 02:46:03.715016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.128 [2024-04-27 02:46:03.715031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.128 [2024-04-27 02:46:03.724953] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.128 [2024-04-27 02:46:03.725035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.128 [2024-04-27 02:46:03.725049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.128 [2024-04-27 02:46:03.725054] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.128 [2024-04-27 02:46:03.725059] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.128 [2024-04-27 02:46:03.725073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.128 qpair failed and we were unable to recover it. 
00:26:30.128 [2024-04-27 02:46:03.734980] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.128 [2024-04-27 02:46:03.735063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.128 [2024-04-27 02:46:03.735082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.128 [2024-04-27 02:46:03.735089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.128 [2024-04-27 02:46:03.735094] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.128 [2024-04-27 02:46:03.735109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.128 qpair failed and we were unable to recover it. 00:26:30.391 [2024-04-27 02:46:03.744987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.391 [2024-04-27 02:46:03.745067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.391 [2024-04-27 02:46:03.745087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.391 [2024-04-27 02:46:03.745095] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.391 [2024-04-27 02:46:03.745100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.391 [2024-04-27 02:46:03.745116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.391 qpair failed and we were unable to recover it. 00:26:30.391 [2024-04-27 02:46:03.754899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.391 [2024-04-27 02:46:03.754982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.391 [2024-04-27 02:46:03.755002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.391 [2024-04-27 02:46:03.755009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.391 [2024-04-27 02:46:03.755018] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.391 [2024-04-27 02:46:03.755033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.391 qpair failed and we were unable to recover it. 
00:26:30.391 [2024-04-27 02:46:03.765028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.391 [2024-04-27 02:46:03.765115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.391 [2024-04-27 02:46:03.765135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.391 [2024-04-27 02:46:03.765142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.391 [2024-04-27 02:46:03.765147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.391 [2024-04-27 02:46:03.765162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.391 qpair failed and we were unable to recover it. 00:26:30.391 [2024-04-27 02:46:03.774965] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.391 [2024-04-27 02:46:03.775046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.391 [2024-04-27 02:46:03.775060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.391 [2024-04-27 02:46:03.775067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.391 [2024-04-27 02:46:03.775071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.392 [2024-04-27 02:46:03.775084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.392 qpair failed and we were unable to recover it. 00:26:30.392 [2024-04-27 02:46:03.785110] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.392 [2024-04-27 02:46:03.785232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.392 [2024-04-27 02:46:03.785246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.392 [2024-04-27 02:46:03.785252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.392 [2024-04-27 02:46:03.785256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.392 [2024-04-27 02:46:03.785268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.392 qpair failed and we were unable to recover it. 
00:26:30.392 [2024-04-27 02:46:03.795133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.392 [2024-04-27 02:46:03.795214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.392 [2024-04-27 02:46:03.795227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.392 [2024-04-27 02:46:03.795233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.392 [2024-04-27 02:46:03.795238] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.392 [2024-04-27 02:46:03.795250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.392 qpair failed and we were unable to recover it. 00:26:30.392 [2024-04-27 02:46:03.805159] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.392 [2024-04-27 02:46:03.805249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.392 [2024-04-27 02:46:03.805262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.392 [2024-04-27 02:46:03.805268] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.392 [2024-04-27 02:46:03.805273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.392 [2024-04-27 02:46:03.805288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.392 qpair failed and we were unable to recover it. 00:26:30.392 [2024-04-27 02:46:03.815191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.392 [2024-04-27 02:46:03.815267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.392 [2024-04-27 02:46:03.815284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.392 [2024-04-27 02:46:03.815290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.392 [2024-04-27 02:46:03.815294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.392 [2024-04-27 02:46:03.815306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.392 qpair failed and we were unable to recover it. 
00:26:30.392 [2024-04-27 02:46:03.825146] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.392 [2024-04-27 02:46:03.825223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.392 [2024-04-27 02:46:03.825236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.392 [2024-04-27 02:46:03.825242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.392 [2024-04-27 02:46:03.825246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.392 [2024-04-27 02:46:03.825257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.392 qpair failed and we were unable to recover it. 00:26:30.392 [2024-04-27 02:46:03.835247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.392 [2024-04-27 02:46:03.835328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.392 [2024-04-27 02:46:03.835342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.392 [2024-04-27 02:46:03.835347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.392 [2024-04-27 02:46:03.835352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.392 [2024-04-27 02:46:03.835364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.392 qpair failed and we were unable to recover it. 00:26:30.392 [2024-04-27 02:46:03.845274] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.392 [2024-04-27 02:46:03.845362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.392 [2024-04-27 02:46:03.845376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.392 [2024-04-27 02:46:03.845382] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.392 [2024-04-27 02:46:03.845390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.392 [2024-04-27 02:46:03.845402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.392 qpair failed and we were unable to recover it. 
00:26:30.392 [2024-04-27 02:46:03.855252] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.392 [2024-04-27 02:46:03.855372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.392 [2024-04-27 02:46:03.855386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.392 [2024-04-27 02:46:03.855391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.392 [2024-04-27 02:46:03.855396] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.392 [2024-04-27 02:46:03.855408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.392 qpair failed and we were unable to recover it. 00:26:30.392 [2024-04-27 02:46:03.865272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.392 [2024-04-27 02:46:03.865358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.392 [2024-04-27 02:46:03.865372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.392 [2024-04-27 02:46:03.865378] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.392 [2024-04-27 02:46:03.865383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.392 [2024-04-27 02:46:03.865395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.392 qpair failed and we were unable to recover it. 00:26:30.392 [2024-04-27 02:46:03.875329] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.392 [2024-04-27 02:46:03.875421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.392 [2024-04-27 02:46:03.875435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.392 [2024-04-27 02:46:03.875441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.392 [2024-04-27 02:46:03.875445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.392 [2024-04-27 02:46:03.875457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.392 qpair failed and we were unable to recover it. 
00:26:30.392 [2024-04-27 02:46:03.885360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.392 [2024-04-27 02:46:03.885438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.392 [2024-04-27 02:46:03.885451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.392 [2024-04-27 02:46:03.885457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.393 [2024-04-27 02:46:03.885461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.393 [2024-04-27 02:46:03.885473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.393 qpair failed and we were unable to recover it. 00:26:30.393 [2024-04-27 02:46:03.895396] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.393 [2024-04-27 02:46:03.895473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.393 [2024-04-27 02:46:03.895486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.393 [2024-04-27 02:46:03.895492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.393 [2024-04-27 02:46:03.895497] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.393 [2024-04-27 02:46:03.895510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.393 qpair failed and we were unable to recover it. 00:26:30.393 [2024-04-27 02:46:03.905411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.393 [2024-04-27 02:46:03.905490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.393 [2024-04-27 02:46:03.905503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.393 [2024-04-27 02:46:03.905509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.393 [2024-04-27 02:46:03.905515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.393 [2024-04-27 02:46:03.905526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.393 qpair failed and we were unable to recover it. 
00:26:30.393 [2024-04-27 02:46:03.915449] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.393 [2024-04-27 02:46:03.915529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.393 [2024-04-27 02:46:03.915543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.393 [2024-04-27 02:46:03.915548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.393 [2024-04-27 02:46:03.915553] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.393 [2024-04-27 02:46:03.915565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.393 qpair failed and we were unable to recover it. 00:26:30.393 [2024-04-27 02:46:03.925504] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.393 [2024-04-27 02:46:03.925587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.393 [2024-04-27 02:46:03.925600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.393 [2024-04-27 02:46:03.925606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.393 [2024-04-27 02:46:03.925611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.393 [2024-04-27 02:46:03.925623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.393 qpair failed and we were unable to recover it. 00:26:30.393 [2024-04-27 02:46:03.935512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.393 [2024-04-27 02:46:03.935594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.393 [2024-04-27 02:46:03.935607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.393 [2024-04-27 02:46:03.935616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.393 [2024-04-27 02:46:03.935621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.393 [2024-04-27 02:46:03.935633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.393 qpair failed and we were unable to recover it. 
00:26:30.393 [2024-04-27 02:46:03.945536] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.393 [2024-04-27 02:46:03.945613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.393 [2024-04-27 02:46:03.945626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.393 [2024-04-27 02:46:03.945631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.393 [2024-04-27 02:46:03.945636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.393 [2024-04-27 02:46:03.945648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.393 qpair failed and we were unable to recover it. 00:26:30.393 [2024-04-27 02:46:03.955524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.393 [2024-04-27 02:46:03.955601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.393 [2024-04-27 02:46:03.955614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.393 [2024-04-27 02:46:03.955619] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.393 [2024-04-27 02:46:03.955624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.393 [2024-04-27 02:46:03.955636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.393 qpair failed and we were unable to recover it. 00:26:30.393 [2024-04-27 02:46:03.965581] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.393 [2024-04-27 02:46:03.965660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.393 [2024-04-27 02:46:03.965674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.393 [2024-04-27 02:46:03.965679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.393 [2024-04-27 02:46:03.965684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.393 [2024-04-27 02:46:03.965695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.393 qpair failed and we were unable to recover it. 
00:26:30.393 [2024-04-27 02:46:03.975604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.393 [2024-04-27 02:46:03.975680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.393 [2024-04-27 02:46:03.975693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.393 [2024-04-27 02:46:03.975699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.393 [2024-04-27 02:46:03.975704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.393 [2024-04-27 02:46:03.975716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.393 qpair failed and we were unable to recover it. 00:26:30.393 [2024-04-27 02:46:03.985637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.393 [2024-04-27 02:46:03.985753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.393 [2024-04-27 02:46:03.985766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.393 [2024-04-27 02:46:03.985772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.393 [2024-04-27 02:46:03.985776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.393 [2024-04-27 02:46:03.985788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.393 qpair failed and we were unable to recover it. 00:26:30.393 [2024-04-27 02:46:03.995666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.394 [2024-04-27 02:46:03.995795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.394 [2024-04-27 02:46:03.995808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.394 [2024-04-27 02:46:03.995814] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.394 [2024-04-27 02:46:03.995819] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.394 [2024-04-27 02:46:03.995830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.394 qpair failed and we were unable to recover it. 
00:26:30.394 [2024-04-27 02:46:04.005687] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.394 [2024-04-27 02:46:04.005774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.394 [2024-04-27 02:46:04.005794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.394 [2024-04-27 02:46:04.005801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.394 [2024-04-27 02:46:04.005806] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.394 [2024-04-27 02:46:04.005821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.394 qpair failed and we were unable to recover it. 00:26:30.657 [2024-04-27 02:46:04.015712] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.657 [2024-04-27 02:46:04.015800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.657 [2024-04-27 02:46:04.015820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.657 [2024-04-27 02:46:04.015827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.657 [2024-04-27 02:46:04.015833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.657 [2024-04-27 02:46:04.015849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.657 qpair failed and we were unable to recover it. 00:26:30.657 [2024-04-27 02:46:04.025715] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.657 [2024-04-27 02:46:04.025795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.657 [2024-04-27 02:46:04.025819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.657 [2024-04-27 02:46:04.025826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.657 [2024-04-27 02:46:04.025831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.657 [2024-04-27 02:46:04.025847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.657 qpair failed and we were unable to recover it. 
00:26:30.657 [2024-04-27 02:46:04.035822] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.657 [2024-04-27 02:46:04.035935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.657 [2024-04-27 02:46:04.035950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.657 [2024-04-27 02:46:04.035956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.657 [2024-04-27 02:46:04.035961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.657 [2024-04-27 02:46:04.035973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.657 qpair failed and we were unable to recover it. 00:26:30.657 [2024-04-27 02:46:04.045729] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.657 [2024-04-27 02:46:04.045813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.657 [2024-04-27 02:46:04.045826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.657 [2024-04-27 02:46:04.045832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.657 [2024-04-27 02:46:04.045837] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.657 [2024-04-27 02:46:04.045849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.657 qpair failed and we were unable to recover it. 00:26:30.657 [2024-04-27 02:46:04.055818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.657 [2024-04-27 02:46:04.055894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.657 [2024-04-27 02:46:04.055908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.657 [2024-04-27 02:46:04.055913] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.657 [2024-04-27 02:46:04.055918] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.657 [2024-04-27 02:46:04.055929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.657 qpair failed and we were unable to recover it. 
00:26:30.657 [2024-04-27 02:46:04.065824] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.657 [2024-04-27 02:46:04.065902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.657 [2024-04-27 02:46:04.065916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.657 [2024-04-27 02:46:04.065921] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.657 [2024-04-27 02:46:04.065926] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.657 [2024-04-27 02:46:04.065940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.657 qpair failed and we were unable to recover it. 00:26:30.657 [2024-04-27 02:46:04.075853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.657 [2024-04-27 02:46:04.075969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.657 [2024-04-27 02:46:04.075989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.657 [2024-04-27 02:46:04.075996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.657 [2024-04-27 02:46:04.076001] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.657 [2024-04-27 02:46:04.076015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.657 qpair failed and we were unable to recover it. 00:26:30.657 [2024-04-27 02:46:04.085872] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.657 [2024-04-27 02:46:04.085962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.657 [2024-04-27 02:46:04.085982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.657 [2024-04-27 02:46:04.085989] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.657 [2024-04-27 02:46:04.085994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.657 [2024-04-27 02:46:04.086009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.657 qpair failed and we were unable to recover it. 
00:26:30.657 [2024-04-27 02:46:04.095971] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.657 [2024-04-27 02:46:04.096055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.657 [2024-04-27 02:46:04.096075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.657 [2024-04-27 02:46:04.096082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.657 [2024-04-27 02:46:04.096087] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.657 [2024-04-27 02:46:04.096102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.657 qpair failed and we were unable to recover it. 00:26:30.657 [2024-04-27 02:46:04.105854] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.105931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.105946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.105952] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.658 [2024-04-27 02:46:04.105957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.658 [2024-04-27 02:46:04.105970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.658 qpair failed and we were unable to recover it. 00:26:30.658 [2024-04-27 02:46:04.115994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.116072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.116090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.116097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.658 [2024-04-27 02:46:04.116101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.658 [2024-04-27 02:46:04.116114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.658 qpair failed and we were unable to recover it. 
00:26:30.658 [2024-04-27 02:46:04.126028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.126115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.126128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.126134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.658 [2024-04-27 02:46:04.126139] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.658 [2024-04-27 02:46:04.126150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.658 qpair failed and we were unable to recover it. 00:26:30.658 [2024-04-27 02:46:04.136055] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.136132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.136145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.136151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.658 [2024-04-27 02:46:04.136156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.658 [2024-04-27 02:46:04.136167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.658 qpair failed and we were unable to recover it. 00:26:30.658 [2024-04-27 02:46:04.146081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.146156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.146169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.146175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.658 [2024-04-27 02:46:04.146180] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.658 [2024-04-27 02:46:04.146191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.658 qpair failed and we were unable to recover it. 
00:26:30.658 [2024-04-27 02:46:04.156101] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.156180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.156193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.156199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.658 [2024-04-27 02:46:04.156207] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.658 [2024-04-27 02:46:04.156219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.658 qpair failed and we were unable to recover it. 00:26:30.658 [2024-04-27 02:46:04.166129] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.166206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.166219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.166225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.658 [2024-04-27 02:46:04.166230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.658 [2024-04-27 02:46:04.166242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.658 qpair failed and we were unable to recover it. 00:26:30.658 [2024-04-27 02:46:04.176118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.176198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.176211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.176217] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.658 [2024-04-27 02:46:04.176222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.658 [2024-04-27 02:46:04.176234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.658 qpair failed and we were unable to recover it. 
00:26:30.658 [2024-04-27 02:46:04.186176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.186251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.186264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.186270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.658 [2024-04-27 02:46:04.186274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.658 [2024-04-27 02:46:04.186292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.658 qpair failed and we were unable to recover it. 00:26:30.658 [2024-04-27 02:46:04.196239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.196323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.196336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.196342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.658 [2024-04-27 02:46:04.196348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.658 [2024-04-27 02:46:04.196359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.658 qpair failed and we were unable to recover it. 00:26:30.658 [2024-04-27 02:46:04.206235] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.206323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.206336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.206342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.658 [2024-04-27 02:46:04.206348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.658 [2024-04-27 02:46:04.206361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.658 qpair failed and we were unable to recover it. 
00:26:30.658 [2024-04-27 02:46:04.216264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.216344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.216358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.216363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.658 [2024-04-27 02:46:04.216368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.658 [2024-04-27 02:46:04.216380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.658 qpair failed and we were unable to recover it. 00:26:30.658 [2024-04-27 02:46:04.226246] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.226326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.226339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.226345] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.658 [2024-04-27 02:46:04.226350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.658 [2024-04-27 02:46:04.226362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.658 qpair failed and we were unable to recover it. 00:26:30.658 [2024-04-27 02:46:04.236311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.658 [2024-04-27 02:46:04.236390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.658 [2024-04-27 02:46:04.236403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.658 [2024-04-27 02:46:04.236409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.659 [2024-04-27 02:46:04.236413] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.659 [2024-04-27 02:46:04.236425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.659 qpair failed and we were unable to recover it. 
00:26:30.659 [2024-04-27 02:46:04.246348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.659 [2024-04-27 02:46:04.246432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.659 [2024-04-27 02:46:04.246445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.659 [2024-04-27 02:46:04.246450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.659 [2024-04-27 02:46:04.246460] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.659 [2024-04-27 02:46:04.246472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.659 qpair failed and we were unable to recover it. 00:26:30.659 [2024-04-27 02:46:04.256381] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.659 [2024-04-27 02:46:04.256501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.659 [2024-04-27 02:46:04.256515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.659 [2024-04-27 02:46:04.256521] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.659 [2024-04-27 02:46:04.256526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.659 [2024-04-27 02:46:04.256537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.659 qpair failed and we were unable to recover it. 00:26:30.659 [2024-04-27 02:46:04.266398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.659 [2024-04-27 02:46:04.266493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.659 [2024-04-27 02:46:04.266506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.659 [2024-04-27 02:46:04.266511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.659 [2024-04-27 02:46:04.266516] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.659 [2024-04-27 02:46:04.266529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.659 qpair failed and we were unable to recover it. 
00:26:30.921 [2024-04-27 02:46:04.276411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.921 [2024-04-27 02:46:04.276493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.921 [2024-04-27 02:46:04.276506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.921 [2024-04-27 02:46:04.276512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.921 [2024-04-27 02:46:04.276517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.921 [2024-04-27 02:46:04.276529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.921 qpair failed and we were unable to recover it. 00:26:30.921 [2024-04-27 02:46:04.286443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.921 [2024-04-27 02:46:04.286526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.921 [2024-04-27 02:46:04.286539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.921 [2024-04-27 02:46:04.286545] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.921 [2024-04-27 02:46:04.286551] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.921 [2024-04-27 02:46:04.286562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.921 qpair failed and we were unable to recover it. 00:26:30.921 [2024-04-27 02:46:04.296471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.921 [2024-04-27 02:46:04.296557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.921 [2024-04-27 02:46:04.296571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.921 [2024-04-27 02:46:04.296577] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.921 [2024-04-27 02:46:04.296582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.921 [2024-04-27 02:46:04.296593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.921 qpair failed and we were unable to recover it. 
00:26:30.921 [2024-04-27 02:46:04.306480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.921 [2024-04-27 02:46:04.306560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.921 [2024-04-27 02:46:04.306573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.921 [2024-04-27 02:46:04.306578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.921 [2024-04-27 02:46:04.306583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.921 [2024-04-27 02:46:04.306595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.921 qpair failed and we were unable to recover it. 00:26:30.921 [2024-04-27 02:46:04.316530] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.921 [2024-04-27 02:46:04.316761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.921 [2024-04-27 02:46:04.316777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.921 [2024-04-27 02:46:04.316782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.921 [2024-04-27 02:46:04.316787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.921 [2024-04-27 02:46:04.316799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.921 qpair failed and we were unable to recover it. 00:26:30.922 [2024-04-27 02:46:04.326571] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.326653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.326666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.922 [2024-04-27 02:46:04.326672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.922 [2024-04-27 02:46:04.326676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.922 [2024-04-27 02:46:04.326688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.922 qpair failed and we were unable to recover it. 
00:26:30.922 [2024-04-27 02:46:04.336593] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.336680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.336694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.922 [2024-04-27 02:46:04.336703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.922 [2024-04-27 02:46:04.336708] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.922 [2024-04-27 02:46:04.336721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.922 qpair failed and we were unable to recover it. 00:26:30.922 [2024-04-27 02:46:04.346587] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.346663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.346677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.922 [2024-04-27 02:46:04.346683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.922 [2024-04-27 02:46:04.346689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.922 [2024-04-27 02:46:04.346701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.922 qpair failed and we were unable to recover it. 00:26:30.922 [2024-04-27 02:46:04.356625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.356702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.356715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.922 [2024-04-27 02:46:04.356721] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.922 [2024-04-27 02:46:04.356725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.922 [2024-04-27 02:46:04.356736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.922 qpair failed and we were unable to recover it. 
00:26:30.922 [2024-04-27 02:46:04.366629] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.366716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.366729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.922 [2024-04-27 02:46:04.366734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.922 [2024-04-27 02:46:04.366739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.922 [2024-04-27 02:46:04.366751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.922 qpair failed and we were unable to recover it. 00:26:30.922 [2024-04-27 02:46:04.376691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.376766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.376779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.922 [2024-04-27 02:46:04.376784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.922 [2024-04-27 02:46:04.376789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.922 [2024-04-27 02:46:04.376801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.922 qpair failed and we were unable to recover it. 00:26:30.922 [2024-04-27 02:46:04.386740] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.386861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.386875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.922 [2024-04-27 02:46:04.386881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.922 [2024-04-27 02:46:04.386885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.922 [2024-04-27 02:46:04.386897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.922 qpair failed and we were unable to recover it. 
00:26:30.922 [2024-04-27 02:46:04.396726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.396808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.396828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.922 [2024-04-27 02:46:04.396835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.922 [2024-04-27 02:46:04.396840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.922 [2024-04-27 02:46:04.396855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.922 qpair failed and we were unable to recover it. 00:26:30.922 [2024-04-27 02:46:04.406779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.406867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.406888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.922 [2024-04-27 02:46:04.406895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.922 [2024-04-27 02:46:04.406900] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.922 [2024-04-27 02:46:04.406915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.922 qpair failed and we were unable to recover it. 00:26:30.922 [2024-04-27 02:46:04.416834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.416922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.416943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.922 [2024-04-27 02:46:04.416949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.922 [2024-04-27 02:46:04.416954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.922 [2024-04-27 02:46:04.416970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.922 qpair failed and we were unable to recover it. 
00:26:30.922 [2024-04-27 02:46:04.426830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.426912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.426935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.922 [2024-04-27 02:46:04.426942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.922 [2024-04-27 02:46:04.426947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.922 [2024-04-27 02:46:04.426963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.922 qpair failed and we were unable to recover it. 00:26:30.922 [2024-04-27 02:46:04.436751] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.436834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.436854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.922 [2024-04-27 02:46:04.436861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.922 [2024-04-27 02:46:04.436866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.922 [2024-04-27 02:46:04.436882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.922 qpair failed and we were unable to recover it. 00:26:30.922 [2024-04-27 02:46:04.446893] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.446984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.447004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.922 [2024-04-27 02:46:04.447011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.922 [2024-04-27 02:46:04.447016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.922 [2024-04-27 02:46:04.447032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.922 qpair failed and we were unable to recover it. 
00:26:30.922 [2024-04-27 02:46:04.456903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.922 [2024-04-27 02:46:04.457033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.922 [2024-04-27 02:46:04.457053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.923 [2024-04-27 02:46:04.457059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.923 [2024-04-27 02:46:04.457065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.923 [2024-04-27 02:46:04.457080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.923 qpair failed and we were unable to recover it. 00:26:30.923 [2024-04-27 02:46:04.466902] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.923 [2024-04-27 02:46:04.466980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.923 [2024-04-27 02:46:04.466994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.923 [2024-04-27 02:46:04.467000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.923 [2024-04-27 02:46:04.467005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.923 [2024-04-27 02:46:04.467021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.923 qpair failed and we were unable to recover it. 00:26:30.923 [2024-04-27 02:46:04.477000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.923 [2024-04-27 02:46:04.477080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.923 [2024-04-27 02:46:04.477094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.923 [2024-04-27 02:46:04.477099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.923 [2024-04-27 02:46:04.477105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.923 [2024-04-27 02:46:04.477117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.923 qpair failed and we were unable to recover it. 
00:26:30.923 [2024-04-27 02:46:04.486990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.923 [2024-04-27 02:46:04.487074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.923 [2024-04-27 02:46:04.487087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.923 [2024-04-27 02:46:04.487093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.923 [2024-04-27 02:46:04.487098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.923 [2024-04-27 02:46:04.487109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.923 qpair failed and we were unable to recover it. 00:26:30.923 [2024-04-27 02:46:04.496887] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.923 [2024-04-27 02:46:04.496975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.923 [2024-04-27 02:46:04.496994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.923 [2024-04-27 02:46:04.497001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.923 [2024-04-27 02:46:04.497006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.923 [2024-04-27 02:46:04.497021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.923 qpair failed and we were unable to recover it. 00:26:30.923 [2024-04-27 02:46:04.507022] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.923 [2024-04-27 02:46:04.507110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.923 [2024-04-27 02:46:04.507130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.923 [2024-04-27 02:46:04.507136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.923 [2024-04-27 02:46:04.507141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.923 [2024-04-27 02:46:04.507156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.923 qpair failed and we were unable to recover it. 
00:26:30.923 [2024-04-27 02:46:04.517039] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.923 [2024-04-27 02:46:04.517119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.923 [2024-04-27 02:46:04.517137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.923 [2024-04-27 02:46:04.517143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.923 [2024-04-27 02:46:04.517148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.923 [2024-04-27 02:46:04.517160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.923 qpair failed and we were unable to recover it. 00:26:30.923 [2024-04-27 02:46:04.527115] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.923 [2024-04-27 02:46:04.527204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.923 [2024-04-27 02:46:04.527217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.923 [2024-04-27 02:46:04.527223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.923 [2024-04-27 02:46:04.527227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.923 [2024-04-27 02:46:04.527239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.923 qpair failed and we were unable to recover it. 00:26:30.923 [2024-04-27 02:46:04.537089] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.923 [2024-04-27 02:46:04.537169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.923 [2024-04-27 02:46:04.537182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.923 [2024-04-27 02:46:04.537187] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.923 [2024-04-27 02:46:04.537192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:30.923 [2024-04-27 02:46:04.537204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.923 qpair failed and we were unable to recover it. 
00:26:31.187 [2024-04-27 02:46:04.547175] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.187 [2024-04-27 02:46:04.547264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.187 [2024-04-27 02:46:04.547280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.187 [2024-04-27 02:46:04.547286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.187 [2024-04-27 02:46:04.547291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.187 [2024-04-27 02:46:04.547303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.187 qpair failed and we were unable to recover it. 00:26:31.187 [2024-04-27 02:46:04.557233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.187 [2024-04-27 02:46:04.557345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.187 [2024-04-27 02:46:04.557358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.187 [2024-04-27 02:46:04.557364] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.187 [2024-04-27 02:46:04.557368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.187 [2024-04-27 02:46:04.557383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.187 qpair failed and we were unable to recover it. 00:26:31.187 [2024-04-27 02:46:04.567223] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.187 [2024-04-27 02:46:04.567308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.187 [2024-04-27 02:46:04.567322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.187 [2024-04-27 02:46:04.567327] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.187 [2024-04-27 02:46:04.567332] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.187 [2024-04-27 02:46:04.567342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.187 qpair failed and we were unable to recover it. 
00:26:31.187 [2024-04-27 02:46:04.577095] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.187 [2024-04-27 02:46:04.577171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.187 [2024-04-27 02:46:04.577184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.187 [2024-04-27 02:46:04.577190] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.187 [2024-04-27 02:46:04.577194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.187 [2024-04-27 02:46:04.577206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.187 qpair failed and we were unable to recover it. 00:26:31.187 [2024-04-27 02:46:04.587254] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.187 [2024-04-27 02:46:04.587353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.187 [2024-04-27 02:46:04.587367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.187 [2024-04-27 02:46:04.587372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.187 [2024-04-27 02:46:04.587378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.187 [2024-04-27 02:46:04.587389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.187 qpair failed and we were unable to recover it. 00:26:31.187 [2024-04-27 02:46:04.597147] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.187 [2024-04-27 02:46:04.597223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.187 [2024-04-27 02:46:04.597237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.187 [2024-04-27 02:46:04.597242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.187 [2024-04-27 02:46:04.597247] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.187 [2024-04-27 02:46:04.597258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.187 qpair failed and we were unable to recover it. 
00:26:31.187 [2024-04-27 02:46:04.607310] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.187 [2024-04-27 02:46:04.607397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.187 [2024-04-27 02:46:04.607410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.187 [2024-04-27 02:46:04.607417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.187 [2024-04-27 02:46:04.607421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.187 [2024-04-27 02:46:04.607433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.187 qpair failed and we were unable to recover it. 00:26:31.187 [2024-04-27 02:46:04.617280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.187 [2024-04-27 02:46:04.617407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.187 [2024-04-27 02:46:04.617420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.187 [2024-04-27 02:46:04.617426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.187 [2024-04-27 02:46:04.617430] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.187 [2024-04-27 02:46:04.617443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.187 qpair failed and we were unable to recover it. 00:26:31.187 [2024-04-27 02:46:04.627341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.187 [2024-04-27 02:46:04.627422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.187 [2024-04-27 02:46:04.627435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.187 [2024-04-27 02:46:04.627441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.187 [2024-04-27 02:46:04.627445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.187 [2024-04-27 02:46:04.627457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.187 qpair failed and we were unable to recover it. 
00:26:31.187 [2024-04-27 02:46:04.637399] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.187 [2024-04-27 02:46:04.637477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.187 [2024-04-27 02:46:04.637490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.187 [2024-04-27 02:46:04.637495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.187 [2024-04-27 02:46:04.637500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.187 [2024-04-27 02:46:04.637513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.187 qpair failed and we were unable to recover it. 00:26:31.187 [2024-04-27 02:46:04.647428] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.187 [2024-04-27 02:46:04.647515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.187 [2024-04-27 02:46:04.647528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.187 [2024-04-27 02:46:04.647534] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.647542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.647553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 00:26:31.188 [2024-04-27 02:46:04.657461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.188 [2024-04-27 02:46:04.657539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.188 [2024-04-27 02:46:04.657552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.188 [2024-04-27 02:46:04.657558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.657563] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.657575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 
00:26:31.188 [2024-04-27 02:46:04.667495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.188 [2024-04-27 02:46:04.667619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.188 [2024-04-27 02:46:04.667632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.188 [2024-04-27 02:46:04.667637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.667642] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.667653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 00:26:31.188 [2024-04-27 02:46:04.677618] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.188 [2024-04-27 02:46:04.677697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.188 [2024-04-27 02:46:04.677710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.188 [2024-04-27 02:46:04.677716] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.677720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.677732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 00:26:31.188 [2024-04-27 02:46:04.687461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.188 [2024-04-27 02:46:04.687541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.188 [2024-04-27 02:46:04.687554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.188 [2024-04-27 02:46:04.687559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.687564] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.687577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 
00:26:31.188 [2024-04-27 02:46:04.697570] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.188 [2024-04-27 02:46:04.697650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.188 [2024-04-27 02:46:04.697663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.188 [2024-04-27 02:46:04.697669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.697674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.697685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 00:26:31.188 [2024-04-27 02:46:04.707582] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.188 [2024-04-27 02:46:04.707658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.188 [2024-04-27 02:46:04.707671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.188 [2024-04-27 02:46:04.707677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.707682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.707694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 00:26:31.188 [2024-04-27 02:46:04.717631] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.188 [2024-04-27 02:46:04.717712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.188 [2024-04-27 02:46:04.717725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.188 [2024-04-27 02:46:04.717730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.717735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.717747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 
00:26:31.188 [2024-04-27 02:46:04.727653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.188 [2024-04-27 02:46:04.727740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.188 [2024-04-27 02:46:04.727753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.188 [2024-04-27 02:46:04.727759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.727764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.727775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 00:26:31.188 [2024-04-27 02:46:04.737674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.188 [2024-04-27 02:46:04.737765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.188 [2024-04-27 02:46:04.737779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.188 [2024-04-27 02:46:04.737788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.737793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.737805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 00:26:31.188 [2024-04-27 02:46:04.747621] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.188 [2024-04-27 02:46:04.747697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.188 [2024-04-27 02:46:04.747710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.188 [2024-04-27 02:46:04.747715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.747720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.747732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 
00:26:31.188 [2024-04-27 02:46:04.757687] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.188 [2024-04-27 02:46:04.757766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.188 [2024-04-27 02:46:04.757779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.188 [2024-04-27 02:46:04.757784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.757789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.757800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 00:26:31.188 [2024-04-27 02:46:04.767753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.188 [2024-04-27 02:46:04.767832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.188 [2024-04-27 02:46:04.767845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.188 [2024-04-27 02:46:04.767851] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.767856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.767867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 00:26:31.188 [2024-04-27 02:46:04.777786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.188 [2024-04-27 02:46:04.777879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.188 [2024-04-27 02:46:04.777899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.188 [2024-04-27 02:46:04.777907] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.188 [2024-04-27 02:46:04.777912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.188 [2024-04-27 02:46:04.777927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.188 qpair failed and we were unable to recover it. 
00:26:31.189 [2024-04-27 02:46:04.787774] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.189 [2024-04-27 02:46:04.787856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.189 [2024-04-27 02:46:04.787876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.189 [2024-04-27 02:46:04.787883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.189 [2024-04-27 02:46:04.787887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.189 [2024-04-27 02:46:04.787903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.189 qpair failed and we were unable to recover it. 00:26:31.189 [2024-04-27 02:46:04.797808] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.189 [2024-04-27 02:46:04.797890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.189 [2024-04-27 02:46:04.797909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.189 [2024-04-27 02:46:04.797916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.189 [2024-04-27 02:46:04.797921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.189 [2024-04-27 02:46:04.797937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.189 qpair failed and we were unable to recover it. 00:26:31.452 [2024-04-27 02:46:04.807771] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.452 [2024-04-27 02:46:04.807857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.452 [2024-04-27 02:46:04.807878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.452 [2024-04-27 02:46:04.807885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.452 [2024-04-27 02:46:04.807890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.452 [2024-04-27 02:46:04.807905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.452 qpair failed and we were unable to recover it. 
00:26:31.452 [2024-04-27 02:46:04.817898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.452 [2024-04-27 02:46:04.817981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.452 [2024-04-27 02:46:04.817995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.452 [2024-04-27 02:46:04.818002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.452 [2024-04-27 02:46:04.818007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.452 [2024-04-27 02:46:04.818019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.452 qpair failed and we were unable to recover it. 00:26:31.452 [2024-04-27 02:46:04.827782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.452 [2024-04-27 02:46:04.827891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.452 [2024-04-27 02:46:04.827909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.452 [2024-04-27 02:46:04.827915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.452 [2024-04-27 02:46:04.827920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.452 [2024-04-27 02:46:04.827931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.452 qpair failed and we were unable to recover it. 00:26:31.452 [2024-04-27 02:46:04.837930] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.452 [2024-04-27 02:46:04.838007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.452 [2024-04-27 02:46:04.838020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.452 [2024-04-27 02:46:04.838026] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.452 [2024-04-27 02:46:04.838031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.452 [2024-04-27 02:46:04.838043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.452 qpair failed and we were unable to recover it. 
00:26:31.452 [2024-04-27 02:46:04.847940] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.452 [2024-04-27 02:46:04.848025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.452 [2024-04-27 02:46:04.848038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.452 [2024-04-27 02:46:04.848044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.452 [2024-04-27 02:46:04.848049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.452 [2024-04-27 02:46:04.848061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.452 qpair failed and we were unable to recover it. 00:26:31.452 [2024-04-27 02:46:04.858015] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.452 [2024-04-27 02:46:04.858090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.452 [2024-04-27 02:46:04.858103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.452 [2024-04-27 02:46:04.858108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.452 [2024-04-27 02:46:04.858113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.452 [2024-04-27 02:46:04.858125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.452 qpair failed and we were unable to recover it. 00:26:31.452 [2024-04-27 02:46:04.868031] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.452 [2024-04-27 02:46:04.868123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.452 [2024-04-27 02:46:04.868144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.452 [2024-04-27 02:46:04.868151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.452 [2024-04-27 02:46:04.868156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.452 [2024-04-27 02:46:04.868171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.452 qpair failed and we were unable to recover it. 
00:26:31.452 [2024-04-27 02:46:04.878108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.452 [2024-04-27 02:46:04.878188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.452 [2024-04-27 02:46:04.878202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.452 [2024-04-27 02:46:04.878208] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.452 [2024-04-27 02:46:04.878213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.452 [2024-04-27 02:46:04.878226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.452 qpair failed and we were unable to recover it. 00:26:31.452 [2024-04-27 02:46:04.888094] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.452 [2024-04-27 02:46:04.888176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.452 [2024-04-27 02:46:04.888190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.452 [2024-04-27 02:46:04.888195] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.452 [2024-04-27 02:46:04.888201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.452 [2024-04-27 02:46:04.888213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.452 qpair failed and we were unable to recover it. 00:26:31.452 [2024-04-27 02:46:04.898128] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.452 [2024-04-27 02:46:04.898206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.452 [2024-04-27 02:46:04.898219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.452 [2024-04-27 02:46:04.898225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.452 [2024-04-27 02:46:04.898230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.452 [2024-04-27 02:46:04.898242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.453 qpair failed and we were unable to recover it. 
00:26:31.453 [2024-04-27 02:46:04.908121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.453 [2024-04-27 02:46:04.908210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.453 [2024-04-27 02:46:04.908224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.453 [2024-04-27 02:46:04.908229] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.453 [2024-04-27 02:46:04.908234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff40000b90 00:26:31.453 [2024-04-27 02:46:04.908246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.453 qpair failed and we were unable to recover it. 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error 
(sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 [2024-04-27 02:46:04.908664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:31.453 [2024-04-27 02:46:04.918184] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.453 [2024-04-27 02:46:04.918302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.453 [2024-04-27 02:46:04.918325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.453 [2024-04-27 02:46:04.918334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.453 [2024-04-27 02:46:04.918341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff38000b90 00:26:31.453 [2024-04-27 02:46:04.918358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:31.453 qpair failed and we were unable to recover it. 00:26:31.453 [2024-04-27 02:46:04.928194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.453 [2024-04-27 02:46:04.928301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.453 [2024-04-27 02:46:04.928321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.453 [2024-04-27 02:46:04.928329] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.453 [2024-04-27 02:46:04.928335] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff38000b90 00:26:31.453 [2024-04-27 02:46:04.928352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:31.453 qpair failed and we were unable to recover it. 
00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Write completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 [2024-04-27 02:46:04.928748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.453 [2024-04-27 02:46:04.938193] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.453 [2024-04-27 02:46:04.938302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.453 [2024-04-27 02:46:04.938324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.453 [2024-04-27 02:46:04.938333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric 
CONNECT command 00:26:31.453 [2024-04-27 02:46:04.938339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff48000b90 00:26:31.453 [2024-04-27 02:46:04.938358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.453 qpair failed and we were unable to recover it. 00:26:31.453 [2024-04-27 02:46:04.948251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.453 [2024-04-27 02:46:04.948364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.453 [2024-04-27 02:46:04.948391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.453 [2024-04-27 02:46:04.948400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.453 [2024-04-27 02:46:04.948407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff48000b90 00:26:31.453 [2024-04-27 02:46:04.948428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.453 qpair failed and we were unable to recover it. 00:26:31.453 [2024-04-27 02:46:04.948698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b7f50 is same with the state(5) to be set 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.453 Read completed with error (sct=0, sc=8) 00:26:31.453 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Write completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Write completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Write completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Write completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Write completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 
00:26:31.454 Write completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Write completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Write completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Write completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Write completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Write completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 Read completed with error (sct=0, sc=8) 00:26:31.454 starting I/O failed 00:26:31.454 [2024-04-27 02:46:04.948985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.454 [2024-04-27 02:46:04.958283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.454 [2024-04-27 02:46:04.958403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.454 [2024-04-27 02:46:04.958430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.454 [2024-04-27 02:46:04.958439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.454 [2024-04-27 02:46:04.958446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19c1480 00:26:31.454 [2024-04-27 02:46:04.958466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.454 qpair failed and we were unable to recover it. 00:26:31.454 [2024-04-27 02:46:04.968304] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.454 [2024-04-27 02:46:04.968417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.454 [2024-04-27 02:46:04.968435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.454 [2024-04-27 02:46:04.968443] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.454 [2024-04-27 02:46:04.968449] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19c1480 00:26:31.454 [2024-04-27 02:46:04.968466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:31.454 qpair failed and we were unable to recover it. 
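The qpair failures and failed I/O completions above are the point of this test: the target side is disrupted while host I/O is in flight, so fabric CONNECT polling and queued commands fail until the path recovers. A rough manual way to provoke the same pattern against a running SPDK target, sketched here with rpc.py and the subsystem/address seen in the log (an illustration only, not the exact sequence scripted by host/target_disconnect.sh):

    # Drop the TCP listener while host I/O is still queued, then restore it so the host can reconnect.
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420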
00:26:31.454 [2024-04-27 02:46:04.968817] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b7f50 (9): Bad file descriptor 00:26:31.454 Initializing NVMe Controllers 00:26:31.454 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:31.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:31.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:31.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:31.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:31.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:31.454 Initialization complete. Launching workers. 00:26:31.454 Starting thread on core 1 00:26:31.454 Starting thread on core 2 00:26:31.454 Starting thread on core 3 00:26:31.454 Starting thread on core 0 00:26:31.454 02:46:04 -- host/target_disconnect.sh@59 -- # sync 00:26:31.454 00:26:31.454 real 0m11.422s 00:26:31.454 user 0m20.318s 00:26:31.454 sys 0m4.188s 00:26:31.454 02:46:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:31.454 02:46:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.454 ************************************ 00:26:31.454 END TEST nvmf_target_disconnect_tc2 00:26:31.454 ************************************ 00:26:31.454 02:46:05 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:26:31.454 02:46:05 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:31.454 02:46:05 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:26:31.454 02:46:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:31.454 02:46:05 -- nvmf/common.sh@117 -- # sync 00:26:31.454 02:46:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:31.454 02:46:05 -- nvmf/common.sh@120 -- # set +e 00:26:31.454 02:46:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:31.454 02:46:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:31.454 rmmod nvme_tcp 00:26:31.454 rmmod nvme_fabrics 00:26:31.715 rmmod nvme_keyring 00:26:31.715 02:46:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:31.715 02:46:05 -- nvmf/common.sh@124 -- # set -e 00:26:31.715 02:46:05 -- nvmf/common.sh@125 -- # return 0 00:26:31.715 02:46:05 -- nvmf/common.sh@478 -- # '[' -n 282096 ']' 00:26:31.715 02:46:05 -- nvmf/common.sh@479 -- # killprocess 282096 00:26:31.715 02:46:05 -- common/autotest_common.sh@936 -- # '[' -z 282096 ']' 00:26:31.715 02:46:05 -- common/autotest_common.sh@940 -- # kill -0 282096 00:26:31.715 02:46:05 -- common/autotest_common.sh@941 -- # uname 00:26:31.715 02:46:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:31.715 02:46:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 282096 00:26:31.715 02:46:05 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:26:31.715 02:46:05 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:26:31.715 02:46:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 282096' 00:26:31.715 killing process with pid 282096 00:26:31.715 02:46:05 -- common/autotest_common.sh@955 -- # kill 282096 00:26:31.715 02:46:05 -- common/autotest_common.sh@960 -- # wait 282096 00:26:31.715 02:46:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:31.715 02:46:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:31.715 02:46:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:31.715 02:46:05 -- nvmf/common.sh@274 
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:31.715 02:46:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:31.715 02:46:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.715 02:46:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.715 02:46:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.262 02:46:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:34.262 00:26:34.262 real 0m20.968s 00:26:34.262 user 0m47.985s 00:26:34.262 sys 0m9.696s 00:26:34.262 02:46:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:34.262 02:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.262 ************************************ 00:26:34.262 END TEST nvmf_target_disconnect 00:26:34.262 ************************************ 00:26:34.262 02:46:07 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:26:34.262 02:46:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:34.262 02:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.263 02:46:07 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:26:34.263 00:26:34.263 real 19m25.493s 00:26:34.263 user 40m16.935s 00:26:34.263 sys 6m26.653s 00:26:34.263 02:46:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:34.263 02:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.263 ************************************ 00:26:34.263 END TEST nvmf_tcp 00:26:34.263 ************************************ 00:26:34.263 02:46:07 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:26:34.263 02:46:07 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:34.263 02:46:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:34.263 02:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:34.263 02:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.263 ************************************ 00:26:34.263 START TEST spdkcli_nvmf_tcp 00:26:34.263 ************************************ 00:26:34.263 02:46:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:34.263 * Looking for test storage... 
00:26:34.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:34.263 02:46:07 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:34.263 02:46:07 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:34.263 02:46:07 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:34.263 02:46:07 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:34.263 02:46:07 -- nvmf/common.sh@7 -- # uname -s 00:26:34.263 02:46:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.263 02:46:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.263 02:46:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.263 02:46:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.263 02:46:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.263 02:46:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.263 02:46:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.263 02:46:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.263 02:46:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.263 02:46:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.263 02:46:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:34.263 02:46:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:34.263 02:46:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.263 02:46:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.263 02:46:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:34.263 02:46:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.263 02:46:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.263 02:46:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.263 02:46:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.263 02:46:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.263 02:46:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.263 02:46:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.263 02:46:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.263 02:46:07 -- paths/export.sh@5 -- # export PATH 00:26:34.263 02:46:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.263 02:46:07 -- nvmf/common.sh@47 -- # : 0 00:26:34.263 02:46:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:34.263 02:46:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:34.263 02:46:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.263 02:46:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.263 02:46:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.263 02:46:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:34.263 02:46:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:34.263 02:46:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:34.263 02:46:07 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:34.263 02:46:07 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:34.263 02:46:07 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:34.263 02:46:07 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:34.263 02:46:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:34.263 02:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.263 02:46:07 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:34.263 02:46:07 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=284082 00:26:34.263 02:46:07 -- spdkcli/common.sh@34 -- # waitforlisten 284082 00:26:34.263 02:46:07 -- common/autotest_common.sh@817 -- # '[' -z 284082 ']' 00:26:34.263 02:46:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.263 02:46:07 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:34.263 02:46:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:34.263 02:46:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.263 02:46:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:34.263 02:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.263 [2024-04-27 02:46:07.733780] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
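The spdkcli_job.py invocation that follows drives this freshly started nvmf_tgt with a batch of spdkcli commands (malloc bdevs, a TCP transport, subsystems, namespaces, listeners). The first few batched steps could also be issued one at a time; a minimal sketch, assuming spdkcli.py treats its command-line arguments as a single command, the way 'spdkcli.py ll /nvmf' is used later in this log:

    # Hypothetical one-off equivalents of the first batched steps shown below.
    scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    scripts/spdkcli.py /nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
    scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4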
00:26:34.263 [2024-04-27 02:46:07.733827] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284082 ] 00:26:34.263 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.263 [2024-04-27 02:46:07.792712] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:34.263 [2024-04-27 02:46:07.856629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.263 [2024-04-27 02:46:07.856634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.206 02:46:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:35.206 02:46:08 -- common/autotest_common.sh@850 -- # return 0 00:26:35.206 02:46:08 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:35.206 02:46:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:35.206 02:46:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.206 02:46:08 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:35.206 02:46:08 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:35.206 02:46:08 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:35.206 02:46:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:35.206 02:46:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.206 02:46:08 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:35.206 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:35.206 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:35.206 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:35.206 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:35.206 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:35.206 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:35.206 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:35.206 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:35.206 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' 
'\''127.0.0.1:4261'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:35.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:35.206 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:35.206 ' 00:26:35.468 [2024-04-27 02:46:08.861346] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:37.414 [2024-04-27 02:46:10.866986] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.804 [2024-04-27 02:46:12.030791] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:40.716 [2024-04-27 02:46:14.173036] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:42.631 [2024-04-27 02:46:16.006585] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:44.020 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:44.020 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:44.020 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:44.020 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:44.020 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:44.020 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:44.020 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:44.020 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:44.020 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:44.020 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:44.020 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:44.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:44.020 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:44.020 02:46:17 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:44.020 02:46:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:44.020 02:46:17 -- common/autotest_common.sh@10 -- # set +x 00:26:44.020 02:46:17 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:44.020 02:46:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:44.020 02:46:17 -- common/autotest_common.sh@10 -- # set +x 00:26:44.020 02:46:17 -- spdkcli/nvmf.sh@69 -- # check_match 00:26:44.020 02:46:17 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:44.593 02:46:17 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:44.593 02:46:17 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:44.593 02:46:17 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:44.593 02:46:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:44.593 02:46:17 -- common/autotest_common.sh@10 -- # set +x 00:26:44.593 02:46:17 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:44.593 02:46:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:44.593 02:46:17 -- common/autotest_common.sh@10 
-- # set +x 00:26:44.593 02:46:17 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:44.593 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:44.593 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:44.593 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:44.593 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:44.593 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:44.593 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:44.593 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:44.593 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:44.593 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:44.593 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:44.593 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:44.593 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:44.593 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:44.593 ' 00:26:49.886 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:49.886 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:49.886 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:49.886 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:49.886 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:49.886 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:49.886 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:49.886 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:49.886 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:49.886 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:49.886 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:49.886 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:49.886 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:49.886 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:49.886 02:46:23 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:49.886 02:46:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:49.886 02:46:23 -- common/autotest_common.sh@10 -- # set +x 00:26:49.886 02:46:23 -- spdkcli/nvmf.sh@90 -- # killprocess 284082 00:26:49.886 02:46:23 -- common/autotest_common.sh@936 -- # '[' -z 284082 ']' 00:26:49.886 02:46:23 -- common/autotest_common.sh@940 -- # kill -0 284082 00:26:49.886 02:46:23 -- common/autotest_common.sh@941 -- # uname 00:26:49.886 02:46:23 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:49.886 02:46:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 284082 00:26:49.886 02:46:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:49.886 02:46:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:49.886 02:46:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 284082' 00:26:49.886 killing process with pid 284082 00:26:49.886 02:46:23 -- common/autotest_common.sh@955 -- # kill 284082 00:26:49.886 [2024-04-27 02:46:23.482809] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:49.886 02:46:23 -- common/autotest_common.sh@960 -- # wait 284082 00:26:50.147 02:46:23 -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:50.147 02:46:23 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:50.147 02:46:23 -- spdkcli/common.sh@13 -- # '[' -n 284082 ']' 00:26:50.147 02:46:23 -- spdkcli/common.sh@14 -- # killprocess 284082 00:26:50.147 02:46:23 -- common/autotest_common.sh@936 -- # '[' -z 284082 ']' 00:26:50.147 02:46:23 -- common/autotest_common.sh@940 -- # kill -0 284082 00:26:50.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (284082) - No such process 00:26:50.147 02:46:23 -- common/autotest_common.sh@963 -- # echo 'Process with pid 284082 is not found' 00:26:50.147 Process with pid 284082 is not found 00:26:50.147 02:46:23 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:50.147 02:46:23 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:50.147 02:46:23 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:50.147 00:26:50.147 real 0m16.035s 00:26:50.147 user 0m33.763s 00:26:50.147 sys 0m0.736s 00:26:50.147 02:46:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:50.147 02:46:23 -- common/autotest_common.sh@10 -- # set +x 00:26:50.147 ************************************ 00:26:50.147 END TEST spdkcli_nvmf_tcp 00:26:50.147 ************************************ 00:26:50.147 02:46:23 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:50.147 02:46:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:50.147 02:46:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:50.147 02:46:23 -- common/autotest_common.sh@10 -- # set +x 00:26:50.408 ************************************ 00:26:50.408 START TEST nvmf_identify_passthru 00:26:50.408 ************************************ 00:26:50.408 02:46:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:50.408 * Looking for test storage... 
00:26:50.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:50.408 02:46:23 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.408 02:46:23 -- nvmf/common.sh@7 -- # uname -s 00:26:50.408 02:46:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.408 02:46:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.408 02:46:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.408 02:46:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.408 02:46:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.408 02:46:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.408 02:46:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.408 02:46:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.408 02:46:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.408 02:46:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.408 02:46:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:50.408 02:46:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:50.408 02:46:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.408 02:46:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.408 02:46:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.408 02:46:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.408 02:46:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.408 02:46:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.408 02:46:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.408 02:46:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.408 02:46:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.408 02:46:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.408 02:46:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.408 02:46:23 -- paths/export.sh@5 -- # export PATH 00:26:50.408 02:46:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.408 02:46:23 -- nvmf/common.sh@47 -- # : 0 00:26:50.409 02:46:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:50.409 02:46:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:50.409 02:46:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.409 02:46:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.409 02:46:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.409 02:46:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:50.409 02:46:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:50.409 02:46:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:50.409 02:46:23 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.409 02:46:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.409 02:46:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.409 02:46:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.409 02:46:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.409 02:46:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.409 02:46:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.409 02:46:23 -- paths/export.sh@5 -- # export PATH 00:26:50.409 02:46:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.409 02:46:23 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:26:50.409 02:46:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:50.409 02:46:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.409 02:46:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:50.409 02:46:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:50.409 02:46:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:50.409 02:46:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.409 02:46:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:50.409 02:46:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.409 02:46:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:50.409 02:46:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:50.409 02:46:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:50.409 02:46:23 -- common/autotest_common.sh@10 -- # set +x 00:26:56.996 02:46:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:56.996 02:46:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:56.996 02:46:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:56.996 02:46:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:56.996 02:46:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:56.996 02:46:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:56.996 02:46:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:56.996 02:46:30 -- nvmf/common.sh@295 -- # net_devs=() 00:26:56.996 02:46:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:56.996 02:46:30 -- nvmf/common.sh@296 -- # e810=() 00:26:56.996 02:46:30 -- nvmf/common.sh@296 -- # local -ga e810 00:26:56.996 02:46:30 -- nvmf/common.sh@297 -- # x722=() 00:26:56.996 02:46:30 -- nvmf/common.sh@297 -- # local -ga x722 00:26:56.996 02:46:30 -- nvmf/common.sh@298 -- # mlx=() 00:26:56.996 02:46:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:56.996 02:46:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.996 02:46:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.996 02:46:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.996 02:46:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.996 02:46:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.996 02:46:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.996 02:46:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.996 02:46:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.996 02:46:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.996 02:46:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.996 02:46:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.996 02:46:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:56.996 02:46:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:56.996 02:46:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:56.996 02:46:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.996 02:46:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:56.996 Found 0000:4b:00.0 (0x8086 - 
0x159b) 00:26:56.996 02:46:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.996 02:46:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:56.996 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:56.996 02:46:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:56.996 02:46:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.996 02:46:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.996 02:46:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:56.996 02:46:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.996 02:46:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:56.996 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:56.996 02:46:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.996 02:46:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.996 02:46:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.996 02:46:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:56.996 02:46:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.996 02:46:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:56.996 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:56.996 02:46:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.996 02:46:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:56.996 02:46:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:56.996 02:46:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:56.996 02:46:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:56.996 02:46:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.996 02:46:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.996 02:46:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.996 02:46:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:56.996 02:46:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.996 02:46:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.996 02:46:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:56.996 02:46:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.996 02:46:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.996 02:46:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:56.996 02:46:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:56.996 02:46:30 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:26:56.996 02:46:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.996 02:46:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.996 02:46:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.996 02:46:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:56.996 02:46:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.996 02:46:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.996 02:46:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.996 02:46:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:56.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:26:56.996 00:26:56.996 --- 10.0.0.2 ping statistics --- 00:26:56.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.996 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:26:56.997 02:46:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:56.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:26:56.997 00:26:56.997 --- 10.0.0.1 ping statistics --- 00:26:56.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.997 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:26:56.997 02:46:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.997 02:46:30 -- nvmf/common.sh@411 -- # return 0 00:26:56.997 02:46:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:56.997 02:46:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.997 02:46:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:56.997 02:46:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:56.997 02:46:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.997 02:46:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:56.997 02:46:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:56.997 02:46:30 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:56.997 02:46:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:56.997 02:46:30 -- common/autotest_common.sh@10 -- # set +x 00:26:56.997 02:46:30 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:56.997 02:46:30 -- common/autotest_common.sh@1510 -- # bdfs=() 00:26:56.997 02:46:30 -- common/autotest_common.sh@1510 -- # local bdfs 00:26:56.997 02:46:30 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:26:56.997 02:46:30 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:26:56.997 02:46:30 -- common/autotest_common.sh@1499 -- # bdfs=() 00:26:56.997 02:46:30 -- common/autotest_common.sh@1499 -- # local bdfs 00:26:56.997 02:46:30 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:56.997 02:46:30 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:56.997 02:46:30 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:26:56.997 02:46:30 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:26:56.997 02:46:30 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:26:56.997 02:46:30 -- common/autotest_common.sh@1513 -- # echo 0000:65:00.0 00:26:56.997 02:46:30 -- 
target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:26:56.997 02:46:30 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:26:56.997 02:46:30 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:26:56.997 02:46:30 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:56.997 02:46:30 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:57.257 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.518 02:46:31 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:26:57.518 02:46:31 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:26:57.518 02:46:31 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:57.518 02:46:31 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:57.518 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.090 02:46:31 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:26:58.090 02:46:31 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:58.090 02:46:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:58.090 02:46:31 -- common/autotest_common.sh@10 -- # set +x 00:26:58.090 02:46:31 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:58.090 02:46:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:58.090 02:46:31 -- common/autotest_common.sh@10 -- # set +x 00:26:58.090 02:46:31 -- target/identify_passthru.sh@31 -- # nvmfpid=290993 00:26:58.090 02:46:31 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:58.090 02:46:31 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:58.090 02:46:31 -- target/identify_passthru.sh@35 -- # waitforlisten 290993 00:26:58.090 02:46:31 -- common/autotest_common.sh@817 -- # '[' -z 290993 ']' 00:26:58.090 02:46:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.090 02:46:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:58.090 02:46:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.090 02:46:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:58.090 02:46:31 -- common/autotest_common.sh@10 -- # set +x 00:26:58.090 [2024-04-27 02:46:31.592428] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:26:58.090 [2024-04-27 02:46:31.592486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.090 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.090 [2024-04-27 02:46:31.657258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:58.351 [2024-04-27 02:46:31.720198] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.351 [2024-04-27 02:46:31.720240] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
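At this point the identify-passthru target has been started with --wait-for-rpc, and the test configures it over JSON-RPC exactly as shown in the request/response dumps below. The same configuration could be applied by hand with rpc.py (default /var/tmp/spdk.sock socket assumed); a sketch mirroring the calls that appear in the log:

    # Enable identify passthrough to the underlying controller, finish init, and export it over TCP.
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420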
00:26:58.351 [2024-04-27 02:46:31.720249] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.351 [2024-04-27 02:46:31.720257] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.351 [2024-04-27 02:46:31.720264] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.351 [2024-04-27 02:46:31.720458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.351 [2024-04-27 02:46:31.720634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.351 [2024-04-27 02:46:31.720762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.351 [2024-04-27 02:46:31.720765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.921 02:46:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:58.921 02:46:32 -- common/autotest_common.sh@850 -- # return 0 00:26:58.922 02:46:32 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:58.922 02:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.922 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:26:58.922 INFO: Log level set to 20 00:26:58.922 INFO: Requests: 00:26:58.922 { 00:26:58.922 "jsonrpc": "2.0", 00:26:58.922 "method": "nvmf_set_config", 00:26:58.922 "id": 1, 00:26:58.922 "params": { 00:26:58.922 "admin_cmd_passthru": { 00:26:58.922 "identify_ctrlr": true 00:26:58.922 } 00:26:58.922 } 00:26:58.922 } 00:26:58.922 00:26:58.922 INFO: response: 00:26:58.922 { 00:26:58.922 "jsonrpc": "2.0", 00:26:58.922 "id": 1, 00:26:58.922 "result": true 00:26:58.922 } 00:26:58.922 00:26:58.922 02:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.922 02:46:32 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:58.922 02:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.922 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:26:58.922 INFO: Setting log level to 20 00:26:58.922 INFO: Setting log level to 20 00:26:58.922 INFO: Log level set to 20 00:26:58.922 INFO: Log level set to 20 00:26:58.922 INFO: Requests: 00:26:58.922 { 00:26:58.922 "jsonrpc": "2.0", 00:26:58.922 "method": "framework_start_init", 00:26:58.922 "id": 1 00:26:58.922 } 00:26:58.922 00:26:58.922 INFO: Requests: 00:26:58.922 { 00:26:58.922 "jsonrpc": "2.0", 00:26:58.922 "method": "framework_start_init", 00:26:58.922 "id": 1 00:26:58.922 } 00:26:58.922 00:26:58.922 [2024-04-27 02:46:32.449018] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:58.922 INFO: response: 00:26:58.922 { 00:26:58.922 "jsonrpc": "2.0", 00:26:58.922 "id": 1, 00:26:58.922 "result": true 00:26:58.922 } 00:26:58.922 00:26:58.922 INFO: response: 00:26:58.922 { 00:26:58.922 "jsonrpc": "2.0", 00:26:58.922 "id": 1, 00:26:58.922 "result": true 00:26:58.922 } 00:26:58.922 00:26:58.922 02:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.922 02:46:32 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:58.922 02:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.922 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:26:58.922 INFO: Setting log level to 40 00:26:58.922 INFO: Setting log level to 40 00:26:58.922 INFO: Setting log level to 40 00:26:58.922 [2024-04-27 02:46:32.462262] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.922 02:46:32 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.922 02:46:32 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:58.922 02:46:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:58.922 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:26:58.922 02:46:32 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:26:58.922 02:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.922 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:26:59.492 Nvme0n1 00:26:59.492 02:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.492 02:46:32 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:59.492 02:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.492 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:26:59.492 02:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.492 02:46:32 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:59.492 02:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.492 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:26:59.492 02:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.492 02:46:32 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.492 02:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.492 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:26:59.492 [2024-04-27 02:46:32.847636] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.492 02:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.492 02:46:32 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:59.492 02:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.492 02:46:32 -- common/autotest_common.sh@10 -- # set +x 00:26:59.492 [2024-04-27 02:46:32.859418] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:59.492 [ 00:26:59.492 { 00:26:59.492 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:59.492 "subtype": "Discovery", 00:26:59.492 "listen_addresses": [], 00:26:59.492 "allow_any_host": true, 00:26:59.492 "hosts": [] 00:26:59.492 }, 00:26:59.492 { 00:26:59.492 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.492 "subtype": "NVMe", 00:26:59.492 "listen_addresses": [ 00:26:59.492 { 00:26:59.492 "transport": "TCP", 00:26:59.492 "trtype": "TCP", 00:26:59.492 "adrfam": "IPv4", 00:26:59.492 "traddr": "10.0.0.2", 00:26:59.492 "trsvcid": "4420" 00:26:59.492 } 00:26:59.492 ], 00:26:59.492 "allow_any_host": true, 00:26:59.492 "hosts": [], 00:26:59.492 "serial_number": "SPDK00000000000001", 00:26:59.492 "model_number": "SPDK bdev Controller", 00:26:59.492 "max_namespaces": 1, 00:26:59.492 "min_cntlid": 1, 00:26:59.492 "max_cntlid": 65519, 00:26:59.492 "namespaces": [ 00:26:59.492 { 00:26:59.492 "nsid": 1, 00:26:59.492 "bdev_name": "Nvme0n1", 00:26:59.492 "name": "Nvme0n1", 00:26:59.492 "nguid": "3634473052605487002538450000003C", 00:26:59.492 "uuid": "36344730-5260-5487-0025-38450000003c" 00:26:59.492 } 00:26:59.492 ] 00:26:59.492 } 00:26:59.492 ] 00:26:59.492 02:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.492 02:46:32 -- 
target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:59.492 02:46:32 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:59.492 02:46:32 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:59.492 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.492 02:46:33 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:26:59.492 02:46:33 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:59.492 02:46:33 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:59.492 02:46:33 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:59.753 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.753 02:46:33 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:26:59.753 02:46:33 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:26:59.753 02:46:33 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:26:59.753 02:46:33 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:59.753 02:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.753 02:46:33 -- common/autotest_common.sh@10 -- # set +x 00:26:59.753 02:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.753 02:46:33 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:59.753 02:46:33 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:59.753 02:46:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:59.753 02:46:33 -- nvmf/common.sh@117 -- # sync 00:26:59.753 02:46:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:59.753 02:46:33 -- nvmf/common.sh@120 -- # set +e 00:26:59.753 02:46:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:59.753 02:46:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:59.753 rmmod nvme_tcp 00:26:59.753 rmmod nvme_fabrics 00:26:59.753 rmmod nvme_keyring 00:26:59.753 02:46:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:59.753 02:46:33 -- nvmf/common.sh@124 -- # set -e 00:26:59.753 02:46:33 -- nvmf/common.sh@125 -- # return 0 00:26:59.753 02:46:33 -- nvmf/common.sh@478 -- # '[' -n 290993 ']' 00:26:59.753 02:46:33 -- nvmf/common.sh@479 -- # killprocess 290993 00:26:59.753 02:46:33 -- common/autotest_common.sh@936 -- # '[' -z 290993 ']' 00:26:59.753 02:46:33 -- common/autotest_common.sh@940 -- # kill -0 290993 00:26:59.753 02:46:33 -- common/autotest_common.sh@941 -- # uname 00:26:59.753 02:46:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:59.753 02:46:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 290993 00:26:59.753 02:46:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:59.753 02:46:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:59.753 02:46:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 290993' 00:26:59.753 killing process with pid 290993 00:26:59.753 02:46:33 -- common/autotest_common.sh@955 -- # kill 290993 00:26:59.753 [2024-04-27 02:46:33.336042] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 
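In short, the nvmf_identify_passthru test above enables --passthru-identify-ctrlr on the target and then checks that the Identify Controller data returned over NVMe/TCP matches what the same controller reports directly over PCIe. A minimal sketch of that check, reusing the controller address, NQN and field-extraction pattern from this run:

  bdf=0000:65:00.0
  local_sn=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | awk '/Serial Number:/ {print $3}')
  remote_sn=$(./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      | awk '/Serial Number:/ {print $3}')
  # The test fails if the serial (or model) numbers differ, i.e. passthru did not expose the real controller.
  [ "$local_sn" = "$remote_sn" ] || echo "identify passthru mismatch: $local_sn vs $remote_sn" >&2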
00:26:59.753 02:46:33 -- common/autotest_common.sh@960 -- # wait 290993 00:27:00.013 02:46:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:00.013 02:46:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:00.013 02:46:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:00.013 02:46:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:00.013 02:46:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:00.013 02:46:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.013 02:46:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:00.013 02:46:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.559 02:46:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:02.559 00:27:02.559 real 0m11.887s 00:27:02.559 user 0m9.604s 00:27:02.559 sys 0m5.554s 00:27:02.559 02:46:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:02.559 02:46:35 -- common/autotest_common.sh@10 -- # set +x 00:27:02.559 ************************************ 00:27:02.559 END TEST nvmf_identify_passthru 00:27:02.559 ************************************ 00:27:02.559 02:46:35 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:02.559 02:46:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:02.559 02:46:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:02.559 02:46:35 -- common/autotest_common.sh@10 -- # set +x 00:27:02.559 ************************************ 00:27:02.559 START TEST nvmf_dif 00:27:02.559 ************************************ 00:27:02.559 02:46:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:02.559 * Looking for test storage... 
00:27:02.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:02.559 02:46:35 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.559 02:46:35 -- nvmf/common.sh@7 -- # uname -s 00:27:02.559 02:46:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.559 02:46:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.559 02:46:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.559 02:46:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.559 02:46:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.559 02:46:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.559 02:46:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.559 02:46:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.559 02:46:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.559 02:46:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.559 02:46:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:02.559 02:46:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:02.559 02:46:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.559 02:46:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.559 02:46:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.559 02:46:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.559 02:46:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:02.559 02:46:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.559 02:46:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.559 02:46:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.559 02:46:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.559 02:46:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.559 02:46:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.559 02:46:35 -- paths/export.sh@5 -- # export PATH 00:27:02.559 02:46:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.559 02:46:35 -- nvmf/common.sh@47 -- # : 0 00:27:02.559 02:46:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:02.559 02:46:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:02.559 02:46:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.559 02:46:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.559 02:46:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.559 02:46:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:02.559 02:46:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:02.559 02:46:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:02.559 02:46:35 -- target/dif.sh@15 -- # NULL_META=16 00:27:02.560 02:46:35 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:02.560 02:46:35 -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:02.560 02:46:35 -- target/dif.sh@15 -- # NULL_DIF=1 00:27:02.560 02:46:35 -- target/dif.sh@135 -- # nvmftestinit 00:27:02.560 02:46:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:02.560 02:46:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.560 02:46:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:02.560 02:46:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:02.560 02:46:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:02.560 02:46:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.560 02:46:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:02.560 02:46:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.560 02:46:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:02.560 02:46:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:02.560 02:46:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:02.560 02:46:35 -- common/autotest_common.sh@10 -- # set +x 00:27:09.149 02:46:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:09.149 02:46:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:09.149 02:46:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:09.149 02:46:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:09.149 02:46:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:09.149 02:46:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:09.149 02:46:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:09.149 02:46:42 -- nvmf/common.sh@295 -- # net_devs=() 00:27:09.149 02:46:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:09.149 02:46:42 -- nvmf/common.sh@296 -- # e810=() 00:27:09.149 02:46:42 -- nvmf/common.sh@296 -- # local -ga e810 00:27:09.149 02:46:42 -- nvmf/common.sh@297 -- # x722=() 00:27:09.149 02:46:42 -- nvmf/common.sh@297 -- # local -ga x722 00:27:09.149 02:46:42 -- nvmf/common.sh@298 -- # mlx=() 00:27:09.149 02:46:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:09.149 02:46:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.149 02:46:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.149 02:46:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.149 02:46:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:27:09.149 02:46:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.149 02:46:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.149 02:46:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.149 02:46:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.149 02:46:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.149 02:46:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.149 02:46:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.149 02:46:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:09.149 02:46:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:09.149 02:46:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:09.149 02:46:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.149 02:46:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:09.149 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:09.149 02:46:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.149 02:46:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:09.149 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:09.149 02:46:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:09.149 02:46:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.149 02:46:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.149 02:46:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:09.149 02:46:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.149 02:46:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:09.149 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:09.149 02:46:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.149 02:46:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.149 02:46:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.149 02:46:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:09.149 02:46:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.149 02:46:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:09.149 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:09.149 02:46:42 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:09.149 02:46:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:09.149 02:46:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:09.149 02:46:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:09.149 02:46:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:09.149 02:46:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.149 02:46:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.149 02:46:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.149 02:46:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:09.149 02:46:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.149 02:46:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.149 02:46:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:09.149 02:46:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.149 02:46:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.149 02:46:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:09.149 02:46:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:09.149 02:46:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.149 02:46:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.149 02:46:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.149 02:46:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.149 02:46:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:09.149 02:46:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.149 02:46:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.149 02:46:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.150 02:46:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:09.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:27:09.150 00:27:09.150 --- 10.0.0.2 ping statistics --- 00:27:09.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.150 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:27:09.150 02:46:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:09.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:27:09.150 00:27:09.150 --- 10.0.0.1 ping statistics --- 00:27:09.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.150 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:27:09.150 02:46:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.150 02:46:42 -- nvmf/common.sh@411 -- # return 0 00:27:09.150 02:46:42 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:27:09.150 02:46:42 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:12.456 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:12.456 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:12.456 02:46:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.456 02:46:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:12.456 02:46:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:12.456 02:46:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.456 02:46:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:12.456 02:46:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:12.456 02:46:45 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:12.456 02:46:45 -- target/dif.sh@137 -- # nvmfappstart 00:27:12.456 02:46:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:12.456 02:46:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:12.456 02:46:45 -- common/autotest_common.sh@10 -- # set +x 00:27:12.456 02:46:45 -- nvmf/common.sh@470 -- # nvmfpid=296851 00:27:12.456 02:46:45 -- nvmf/common.sh@471 -- # waitforlisten 296851 00:27:12.456 02:46:45 -- common/autotest_common.sh@817 -- # '[' -z 296851 ']' 00:27:12.456 02:46:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.456 02:46:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:12.456 02:46:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
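For orientation, the nvmf_tcp_init sequence above splits the two ports of the e810 NIC between namespaces so the target and the initiator exchange traffic over a real link: cvl_0_0 (10.0.0.2/24) is moved into the cvl_0_0_ns_spdk namespace for the target, while cvl_0_1 (10.0.0.1/24) stays in the default namespace for the initiator. Condensed, the setup amounts to the commands below, taken from this run, with nvmfappstart then launching the target inside the namespace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator-side port, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port in the host firewall
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF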
00:27:12.456 02:46:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:12.456 02:46:45 -- common/autotest_common.sh@10 -- # set +x 00:27:12.456 02:46:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:12.456 [2024-04-27 02:46:45.896709] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:27:12.456 [2024-04-27 02:46:45.896755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.456 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.456 [2024-04-27 02:46:45.960542] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.456 [2024-04-27 02:46:46.023541] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.457 [2024-04-27 02:46:46.023578] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.457 [2024-04-27 02:46:46.023585] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.457 [2024-04-27 02:46:46.023591] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.457 [2024-04-27 02:46:46.023597] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:12.457 [2024-04-27 02:46:46.023616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.400 02:46:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:13.400 02:46:46 -- common/autotest_common.sh@850 -- # return 0 00:27:13.400 02:46:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:13.400 02:46:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:13.400 02:46:46 -- common/autotest_common.sh@10 -- # set +x 00:27:13.400 02:46:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.400 02:46:46 -- target/dif.sh@139 -- # create_transport 00:27:13.400 02:46:46 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:13.400 02:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.400 02:46:46 -- common/autotest_common.sh@10 -- # set +x 00:27:13.400 [2024-04-27 02:46:46.722269] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.400 02:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.400 02:46:46 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:13.400 02:46:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:13.400 02:46:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:13.400 02:46:46 -- common/autotest_common.sh@10 -- # set +x 00:27:13.400 ************************************ 00:27:13.400 START TEST fio_dif_1_default 00:27:13.400 ************************************ 00:27:13.400 02:46:46 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:27:13.400 02:46:46 -- target/dif.sh@86 -- # create_subsystems 0 00:27:13.400 02:46:46 -- target/dif.sh@28 -- # local sub 00:27:13.400 02:46:46 -- target/dif.sh@30 -- # for sub in "$@" 00:27:13.400 02:46:46 -- target/dif.sh@31 -- # create_subsystem 0 00:27:13.400 02:46:46 -- target/dif.sh@18 -- # local sub_id=0 00:27:13.400 02:46:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
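The fio_dif_1_default case being set up here exercises the target's DIF insert/strip path: the TCP transport is created with --dif-insert-or-strip and the namespace is backed by a null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1. Condensed, the per-test setup driven through rpc_cmd (the test framework's thin wrapper around scripts/rpc.py) is:

  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420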
00:27:13.400 02:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.400 02:46:46 -- common/autotest_common.sh@10 -- # set +x 00:27:13.400 bdev_null0 00:27:13.400 02:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.400 02:46:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:13.400 02:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.400 02:46:46 -- common/autotest_common.sh@10 -- # set +x 00:27:13.400 02:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.400 02:46:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:13.400 02:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.400 02:46:46 -- common/autotest_common.sh@10 -- # set +x 00:27:13.400 02:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.400 02:46:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:13.400 02:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.400 02:46:46 -- common/autotest_common.sh@10 -- # set +x 00:27:13.400 [2024-04-27 02:46:46.854719] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:13.400 02:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.400 02:46:46 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:13.400 02:46:46 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:13.400 02:46:46 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:13.400 02:46:46 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:13.400 02:46:46 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:13.400 02:46:46 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:13.400 02:46:46 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:13.400 02:46:46 -- common/autotest_common.sh@1327 -- # shift 00:27:13.400 02:46:46 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:13.400 02:46:46 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:13.400 02:46:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:13.401 02:46:46 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:13.401 02:46:46 -- nvmf/common.sh@521 -- # config=() 00:27:13.401 02:46:46 -- target/dif.sh@82 -- # gen_fio_conf 00:27:13.401 02:46:46 -- nvmf/common.sh@521 -- # local subsystem config 00:27:13.401 02:46:46 -- target/dif.sh@54 -- # local file 00:27:13.401 02:46:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:13.401 02:46:46 -- target/dif.sh@56 -- # cat 00:27:13.401 02:46:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:13.401 { 00:27:13.401 "params": { 00:27:13.401 "name": "Nvme$subsystem", 00:27:13.401 "trtype": "$TEST_TRANSPORT", 00:27:13.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.401 "adrfam": "ipv4", 00:27:13.401 "trsvcid": "$NVMF_PORT", 00:27:13.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.401 "hdgst": ${hdgst:-false}, 00:27:13.401 "ddgst": ${ddgst:-false} 00:27:13.401 }, 00:27:13.401 "method": "bdev_nvme_attach_controller" 
00:27:13.401 } 00:27:13.401 EOF 00:27:13.401 )") 00:27:13.401 02:46:46 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:13.401 02:46:46 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:13.401 02:46:46 -- nvmf/common.sh@543 -- # cat 00:27:13.401 02:46:46 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:13.401 02:46:46 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:13.401 02:46:46 -- target/dif.sh@72 -- # (( file <= files )) 00:27:13.401 02:46:46 -- nvmf/common.sh@545 -- # jq . 00:27:13.401 02:46:46 -- nvmf/common.sh@546 -- # IFS=, 00:27:13.401 02:46:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:13.401 "params": { 00:27:13.401 "name": "Nvme0", 00:27:13.401 "trtype": "tcp", 00:27:13.401 "traddr": "10.0.0.2", 00:27:13.401 "adrfam": "ipv4", 00:27:13.401 "trsvcid": "4420", 00:27:13.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:13.401 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:13.401 "hdgst": false, 00:27:13.401 "ddgst": false 00:27:13.401 }, 00:27:13.401 "method": "bdev_nvme_attach_controller" 00:27:13.401 }' 00:27:13.401 02:46:46 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:13.401 02:46:46 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:13.401 02:46:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:13.401 02:46:46 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:13.401 02:46:46 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:13.401 02:46:46 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:13.401 02:46:46 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:13.401 02:46:46 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:13.401 02:46:46 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:13.401 02:46:46 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:13.661 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:13.661 fio-3.35 00:27:13.661 Starting 1 thread 00:27:13.922 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.159 00:27:26.159 filename0: (groupid=0, jobs=1): err= 0: pid=297435: Sat Apr 27 02:46:57 2024 00:27:26.159 read: IOPS=186, BW=746KiB/s (764kB/s)(7472KiB/10014msec) 00:27:26.159 slat (nsec): min=5331, max=55214, avg=6214.85, stdev=2116.38 00:27:26.159 clat (usec): min=776, max=42129, avg=21425.78, stdev=20338.93 00:27:26.159 lat (usec): min=784, max=42137, avg=21432.00, stdev=20338.86 00:27:26.159 clat percentiles (usec): 00:27:26.159 | 1.00th=[ 955], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1139], 00:27:26.159 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1729], 60.00th=[41681], 00:27:26.159 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:27:26.159 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:26.159 | 99.99th=[42206] 00:27:26.159 bw ( KiB/s): min= 704, max= 768, per=99.85%, avg=745.60, stdev=29.55, samples=20 00:27:26.159 iops : min= 176, max= 192, avg=186.40, stdev= 7.39, samples=20 00:27:26.159 lat (usec) : 1000=1.12% 00:27:26.159 lat (msec) : 2=48.98%, 50=49.89% 00:27:26.159 cpu : usr=95.27%, sys=4.53%, ctx=16, majf=0, minf=276 00:27:26.159 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:26.159 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.159 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:26.159 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:26.159 00:27:26.159 Run status group 0 (all jobs): 00:27:26.159 READ: bw=746KiB/s (764kB/s), 746KiB/s-746KiB/s (764kB/s-764kB/s), io=7472KiB (7651kB), run=10014-10014msec 00:27:26.159 02:46:58 -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:26.159 02:46:58 -- target/dif.sh@43 -- # local sub 00:27:26.159 02:46:58 -- target/dif.sh@45 -- # for sub in "$@" 00:27:26.159 02:46:58 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:26.159 02:46:58 -- target/dif.sh@36 -- # local sub_id=0 00:27:26.159 02:46:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:26.159 02:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.159 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:27:26.159 02:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.159 02:46:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:26.159 02:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.159 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:27:26.159 02:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.159 00:27:26.159 real 0m11.209s 00:27:26.159 user 0m27.757s 00:27:26.159 sys 0m0.756s 00:27:26.159 02:46:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:26.159 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:27:26.159 ************************************ 00:27:26.159 END TEST fio_dif_1_default 00:27:26.159 ************************************ 00:27:26.159 02:46:58 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:26.159 02:46:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:26.159 02:46:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:26.159 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:27:26.159 ************************************ 00:27:26.159 START TEST fio_dif_1_multi_subsystems 00:27:26.159 ************************************ 00:27:26.159 02:46:58 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:27:26.159 02:46:58 -- target/dif.sh@92 -- # local files=1 00:27:26.159 02:46:58 -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:26.159 02:46:58 -- target/dif.sh@28 -- # local sub 00:27:26.159 02:46:58 -- target/dif.sh@30 -- # for sub in "$@" 00:27:26.159 02:46:58 -- target/dif.sh@31 -- # create_subsystem 0 00:27:26.159 02:46:58 -- target/dif.sh@18 -- # local sub_id=0 00:27:26.159 02:46:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:26.159 02:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.159 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:27:26.159 bdev_null0 00:27:26.159 02:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.159 02:46:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:26.159 02:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.159 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:27:26.159 02:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.159 02:46:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:27:26.159 02:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.159 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:27:26.159 02:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.159 02:46:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:26.159 02:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.159 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:27:26.159 [2024-04-27 02:46:58.239375] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.159 02:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.159 02:46:58 -- target/dif.sh@30 -- # for sub in "$@" 00:27:26.159 02:46:58 -- target/dif.sh@31 -- # create_subsystem 1 00:27:26.159 02:46:58 -- target/dif.sh@18 -- # local sub_id=1 00:27:26.159 02:46:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:26.159 02:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.159 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:27:26.159 bdev_null1 00:27:26.159 02:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.159 02:46:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:26.159 02:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.159 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:27:26.159 02:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.159 02:46:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:26.159 02:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.159 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:27:26.159 02:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.159 02:46:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:26.159 02:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.159 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:27:26.159 02:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.159 02:46:58 -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:26.159 02:46:58 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:26.159 02:46:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:26.159 02:46:58 -- nvmf/common.sh@521 -- # config=() 00:27:26.159 02:46:58 -- nvmf/common.sh@521 -- # local subsystem config 00:27:26.159 02:46:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:26.159 02:46:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:26.159 { 00:27:26.159 "params": { 00:27:26.159 "name": "Nvme$subsystem", 00:27:26.159 "trtype": "$TEST_TRANSPORT", 00:27:26.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.159 "adrfam": "ipv4", 00:27:26.159 "trsvcid": "$NVMF_PORT", 00:27:26.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.159 "hdgst": ${hdgst:-false}, 00:27:26.159 "ddgst": ${ddgst:-false} 00:27:26.159 }, 00:27:26.160 "method": "bdev_nvme_attach_controller" 00:27:26.160 } 00:27:26.160 EOF 00:27:26.160 )") 00:27:26.160 02:46:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:26.160 02:46:58 -- common/autotest_common.sh@1342 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:26.160 02:46:58 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:26.160 02:46:58 -- target/dif.sh@82 -- # gen_fio_conf 00:27:26.160 02:46:58 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:26.160 02:46:58 -- target/dif.sh@54 -- # local file 00:27:26.160 02:46:58 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:26.160 02:46:58 -- target/dif.sh@56 -- # cat 00:27:26.160 02:46:58 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:26.160 02:46:58 -- common/autotest_common.sh@1327 -- # shift 00:27:26.160 02:46:58 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:26.160 02:46:58 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:26.160 02:46:58 -- nvmf/common.sh@543 -- # cat 00:27:26.160 02:46:58 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:26.160 02:46:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:26.160 02:46:58 -- target/dif.sh@72 -- # (( file <= files )) 00:27:26.160 02:46:58 -- target/dif.sh@73 -- # cat 00:27:26.160 02:46:58 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:26.160 02:46:58 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:26.160 02:46:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:26.160 02:46:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:26.160 { 00:27:26.160 "params": { 00:27:26.160 "name": "Nvme$subsystem", 00:27:26.160 "trtype": "$TEST_TRANSPORT", 00:27:26.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.160 "adrfam": "ipv4", 00:27:26.160 "trsvcid": "$NVMF_PORT", 00:27:26.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.160 "hdgst": ${hdgst:-false}, 00:27:26.160 "ddgst": ${ddgst:-false} 00:27:26.160 }, 00:27:26.160 "method": "bdev_nvme_attach_controller" 00:27:26.160 } 00:27:26.160 EOF 00:27:26.160 )") 00:27:26.160 02:46:58 -- target/dif.sh@72 -- # (( file++ )) 00:27:26.160 02:46:58 -- target/dif.sh@72 -- # (( file <= files )) 00:27:26.160 02:46:58 -- nvmf/common.sh@543 -- # cat 00:27:26.160 02:46:58 -- nvmf/common.sh@545 -- # jq . 
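The fio_bdev helper used throughout these cases is stock fio with SPDK's bdev ioengine plugin preloaded: the JSON that gen_nvmf_target_json emits (printed just below) makes the plugin attach NVMe-oF controllers over TCP to 10.0.0.2:4420, and the job file from gen_fio_conf then points each fio job at the corresponding attached bdev. Stripped of the /dev/fd plumbing, the invocation is roughly as follows (bdev.json and job.fio are stand-in file names for the pipes the script uses):

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio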
00:27:26.160 02:46:58 -- nvmf/common.sh@546 -- # IFS=, 00:27:26.160 02:46:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:26.160 "params": { 00:27:26.160 "name": "Nvme0", 00:27:26.160 "trtype": "tcp", 00:27:26.160 "traddr": "10.0.0.2", 00:27:26.160 "adrfam": "ipv4", 00:27:26.160 "trsvcid": "4420", 00:27:26.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:26.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:26.160 "hdgst": false, 00:27:26.160 "ddgst": false 00:27:26.160 }, 00:27:26.160 "method": "bdev_nvme_attach_controller" 00:27:26.160 },{ 00:27:26.160 "params": { 00:27:26.160 "name": "Nvme1", 00:27:26.160 "trtype": "tcp", 00:27:26.160 "traddr": "10.0.0.2", 00:27:26.160 "adrfam": "ipv4", 00:27:26.160 "trsvcid": "4420", 00:27:26.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:26.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:26.160 "hdgst": false, 00:27:26.160 "ddgst": false 00:27:26.160 }, 00:27:26.160 "method": "bdev_nvme_attach_controller" 00:27:26.160 }' 00:27:26.160 02:46:58 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:26.160 02:46:58 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:26.160 02:46:58 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:26.160 02:46:58 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:26.160 02:46:58 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:26.160 02:46:58 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:26.160 02:46:58 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:26.160 02:46:58 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:26.160 02:46:58 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:26.160 02:46:58 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:26.160 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:26.160 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:26.160 fio-3.35 00:27:26.160 Starting 2 threads 00:27:26.160 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.285 00:27:36.285 filename0: (groupid=0, jobs=1): err= 0: pid=299909: Sat Apr 27 02:47:09 2024 00:27:36.285 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10001msec) 00:27:36.285 slat (nsec): min=5328, max=47993, avg=6453.91, stdev=2069.23 00:27:36.285 clat (usec): min=41762, max=43477, avg=42001.17, stdev=154.45 00:27:36.285 lat (usec): min=41775, max=43525, avg=42007.62, stdev=155.03 00:27:36.285 clat percentiles (usec): 00:27:36.285 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:27:36.285 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:27:36.285 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:36.285 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:27:36.285 | 99.99th=[43254] 00:27:36.285 bw ( KiB/s): min= 352, max= 384, per=34.38%, avg=380.63, stdev=10.09, samples=19 00:27:36.285 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:27:36.285 lat (msec) : 50=100.00% 00:27:36.285 cpu : usr=96.95%, sys=2.85%, ctx=19, majf=0, minf=159 00:27:36.285 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:36.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:36.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.286 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.286 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:36.286 filename1: (groupid=0, jobs=1): err= 0: pid=299910: Sat Apr 27 02:47:09 2024 00:27:36.286 read: IOPS=181, BW=726KiB/s (743kB/s)(7280KiB/10033msec) 00:27:36.286 slat (nsec): min=5321, max=32929, avg=6119.04, stdev=1389.83 00:27:36.286 clat (usec): min=1461, max=44390, avg=22032.50, stdev=20264.12 00:27:36.286 lat (usec): min=1466, max=44423, avg=22038.62, stdev=20264.09 00:27:36.286 clat percentiles (usec): 00:27:36.286 | 1.00th=[ 1614], 5.00th=[ 1680], 10.00th=[ 1696], 20.00th=[ 1713], 00:27:36.286 | 30.00th=[ 1729], 40.00th=[ 1745], 50.00th=[41681], 60.00th=[42206], 00:27:36.286 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:36.286 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:27:36.286 | 99.99th=[44303] 00:27:36.286 bw ( KiB/s): min= 704, max= 768, per=65.69%, avg=726.40, stdev=31.32, samples=20 00:27:36.286 iops : min= 176, max= 192, avg=181.60, stdev= 7.83, samples=20 00:27:36.286 lat (msec) : 2=48.68%, 4=1.21%, 50=50.11% 00:27:36.286 cpu : usr=96.94%, sys=2.85%, ctx=16, majf=0, minf=28 00:27:36.286 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:36.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.286 issued rwts: total=1820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.286 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:36.286 00:27:36.286 Run status group 0 (all jobs): 00:27:36.286 READ: bw=1105KiB/s (1132kB/s), 381KiB/s-726KiB/s (390kB/s-743kB/s), io=10.8MiB (11.4MB), run=10001-10033msec 00:27:36.286 02:47:09 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:36.286 02:47:09 -- target/dif.sh@43 -- # local sub 00:27:36.286 02:47:09 -- target/dif.sh@45 -- # for sub in "$@" 00:27:36.286 02:47:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:36.286 02:47:09 -- target/dif.sh@36 -- # local sub_id=0 00:27:36.286 02:47:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:36.286 02:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.286 02:47:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.286 02:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.286 02:47:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:36.286 02:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.286 02:47:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.286 02:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.286 02:47:09 -- target/dif.sh@45 -- # for sub in "$@" 00:27:36.286 02:47:09 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:36.286 02:47:09 -- target/dif.sh@36 -- # local sub_id=1 00:27:36.286 02:47:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:36.286 02:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.286 02:47:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.286 02:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.286 02:47:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:36.286 02:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.286 02:47:09 -- common/autotest_common.sh@10 
-- # set +x 00:27:36.286 02:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.286 00:27:36.286 real 0m11.320s 00:27:36.286 user 0m33.214s 00:27:36.286 sys 0m0.885s 00:27:36.286 02:47:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:36.286 02:47:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.286 ************************************ 00:27:36.286 END TEST fio_dif_1_multi_subsystems 00:27:36.286 ************************************ 00:27:36.286 02:47:09 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:36.286 02:47:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:36.286 02:47:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:36.286 02:47:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.286 ************************************ 00:27:36.286 START TEST fio_dif_rand_params 00:27:36.286 ************************************ 00:27:36.286 02:47:09 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:27:36.286 02:47:09 -- target/dif.sh@100 -- # local NULL_DIF 00:27:36.286 02:47:09 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:36.286 02:47:09 -- target/dif.sh@103 -- # NULL_DIF=3 00:27:36.286 02:47:09 -- target/dif.sh@103 -- # bs=128k 00:27:36.286 02:47:09 -- target/dif.sh@103 -- # numjobs=3 00:27:36.286 02:47:09 -- target/dif.sh@103 -- # iodepth=3 00:27:36.286 02:47:09 -- target/dif.sh@103 -- # runtime=5 00:27:36.286 02:47:09 -- target/dif.sh@105 -- # create_subsystems 0 00:27:36.286 02:47:09 -- target/dif.sh@28 -- # local sub 00:27:36.286 02:47:09 -- target/dif.sh@30 -- # for sub in "$@" 00:27:36.286 02:47:09 -- target/dif.sh@31 -- # create_subsystem 0 00:27:36.286 02:47:09 -- target/dif.sh@18 -- # local sub_id=0 00:27:36.286 02:47:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:36.286 02:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.286 02:47:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.286 bdev_null0 00:27:36.286 02:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.286 02:47:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:36.286 02:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.286 02:47:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.286 02:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.286 02:47:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:36.286 02:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.286 02:47:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.286 02:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.286 02:47:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:36.286 02:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.286 02:47:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.286 [2024-04-27 02:47:09.729135] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.286 02:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.286 02:47:09 -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:36.286 02:47:09 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.286 02:47:09 -- common/autotest_common.sh@1342 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.286 02:47:09 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:36.286 02:47:09 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:36.286 02:47:09 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:36.286 02:47:09 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:36.286 02:47:09 -- common/autotest_common.sh@1327 -- # shift 00:27:36.286 02:47:09 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:36.286 02:47:09 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:36.287 02:47:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:36.287 02:47:09 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:36.287 02:47:09 -- target/dif.sh@82 -- # gen_fio_conf 00:27:36.287 02:47:09 -- nvmf/common.sh@521 -- # config=() 00:27:36.287 02:47:09 -- target/dif.sh@54 -- # local file 00:27:36.287 02:47:09 -- nvmf/common.sh@521 -- # local subsystem config 00:27:36.287 02:47:09 -- target/dif.sh@56 -- # cat 00:27:36.287 02:47:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:36.287 02:47:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:36.287 { 00:27:36.287 "params": { 00:27:36.287 "name": "Nvme$subsystem", 00:27:36.287 "trtype": "$TEST_TRANSPORT", 00:27:36.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.287 "adrfam": "ipv4", 00:27:36.287 "trsvcid": "$NVMF_PORT", 00:27:36.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.287 "hdgst": ${hdgst:-false}, 00:27:36.287 "ddgst": ${ddgst:-false} 00:27:36.287 }, 00:27:36.287 "method": "bdev_nvme_attach_controller" 00:27:36.287 } 00:27:36.287 EOF 00:27:36.287 )") 00:27:36.287 02:47:09 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:36.287 02:47:09 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:36.287 02:47:09 -- nvmf/common.sh@543 -- # cat 00:27:36.287 02:47:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:36.287 02:47:09 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:36.287 02:47:09 -- target/dif.sh@72 -- # (( file <= files )) 00:27:36.287 02:47:09 -- nvmf/common.sh@545 -- # jq . 
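For reference, the single-subsystem setup traced above can be reproduced outside the test harness. A minimal sketch, assuming rpc_cmd wraps SPDK's scripts/rpc.py client, that an nvmf_tgt application is already running, and that a TCP transport was created earlier in the run (not shown in this excerpt):

rpc=./scripts/rpc.py   # assumed equivalent of the harness's rpc_cmd helper

# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3 (arguments copied from the trace above)
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# NVMe-oF subsystem, namespace and TCP listener matching the traced rpc_cmd arguments
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
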
00:27:36.287 02:47:09 -- nvmf/common.sh@546 -- # IFS=, 00:27:36.287 02:47:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:36.287 "params": { 00:27:36.287 "name": "Nvme0", 00:27:36.287 "trtype": "tcp", 00:27:36.287 "traddr": "10.0.0.2", 00:27:36.287 "adrfam": "ipv4", 00:27:36.287 "trsvcid": "4420", 00:27:36.287 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.287 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:36.287 "hdgst": false, 00:27:36.287 "ddgst": false 00:27:36.287 }, 00:27:36.287 "method": "bdev_nvme_attach_controller" 00:27:36.287 }' 00:27:36.287 02:47:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:36.287 02:47:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:36.287 02:47:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:36.287 02:47:09 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:36.287 02:47:09 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:36.287 02:47:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:36.287 02:47:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:36.287 02:47:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:36.287 02:47:09 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:36.287 02:47:09 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.548 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:36.548 ... 00:27:36.548 fio-3.35 00:27:36.548 Starting 3 threads 00:27:36.809 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.404 00:27:43.404 filename0: (groupid=0, jobs=1): err= 0: pid=302270: Sat Apr 27 02:47:15 2024 00:27:43.404 read: IOPS=129, BW=16.2MiB/s (17.0MB/s)(81.0MiB/5006msec) 00:27:43.404 slat (nsec): min=5343, max=30827, avg=5924.42, stdev=1228.15 00:27:43.404 clat (usec): min=6703, max=57923, avg=23161.57, stdev=19862.14 00:27:43.404 lat (usec): min=6708, max=57930, avg=23167.49, stdev=19862.25 00:27:43.404 clat percentiles (usec): 00:27:43.404 | 1.00th=[ 7177], 5.00th=[ 7504], 10.00th=[ 8160], 20.00th=[ 8848], 00:27:43.404 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11207], 60.00th=[12125], 00:27:43.404 | 70.00th=[48497], 80.00th=[52691], 90.00th=[54789], 95.00th=[55313], 00:27:43.404 | 99.00th=[56361], 99.50th=[56886], 99.90th=[57934], 99.95th=[57934], 00:27:43.404 | 99.99th=[57934] 00:27:43.404 bw ( KiB/s): min= 9984, max=27648, per=39.83%, avg=16512.00, stdev=4668.05, samples=10 00:27:43.404 iops : min= 78, max= 216, avg=129.00, stdev=36.47, samples=10 00:27:43.404 lat (msec) : 10=34.41%, 20=35.49%, 50=2.31%, 100=27.78% 00:27:43.404 cpu : usr=96.86%, sys=2.86%, ctx=7, majf=0, minf=95 00:27:43.404 IO depths : 1=8.3%, 2=91.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:43.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.404 issued rwts: total=648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.404 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:43.404 filename0: (groupid=0, jobs=1): err= 0: pid=302271: Sat Apr 27 02:47:15 2024 00:27:43.404 read: IOPS=101, BW=12.7MiB/s (13.3MB/s)(63.8MiB/5021msec) 00:27:43.404 slat (nsec): min=5355, max=32044, avg=7891.51, stdev=2122.68 00:27:43.404 clat (usec): 
min=6570, max=58922, avg=29520.56, stdev=21197.46 00:27:43.404 lat (usec): min=6578, max=58930, avg=29528.45, stdev=21197.13 00:27:43.404 clat percentiles (usec): 00:27:43.404 | 1.00th=[ 7439], 5.00th=[ 8225], 10.00th=[ 9110], 20.00th=[10552], 00:27:43.404 | 30.00th=[11469], 40.00th=[12649], 50.00th=[14484], 60.00th=[51119], 00:27:43.404 | 70.00th=[53216], 80.00th=[54789], 90.00th=[55837], 95.00th=[56361], 00:27:43.404 | 99.00th=[57410], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:27:43.404 | 99.99th=[58983] 00:27:43.404 bw ( KiB/s): min= 7680, max=25344, per=31.31%, avg=12979.20, stdev=4849.83, samples=10 00:27:43.404 iops : min= 60, max= 198, avg=101.40, stdev=37.89, samples=10 00:27:43.405 lat (msec) : 10=13.73%, 20=44.51%, 50=0.39%, 100=41.37% 00:27:43.405 cpu : usr=97.13%, sys=2.53%, ctx=9, majf=0, minf=81 00:27:43.405 IO depths : 1=12.7%, 2=87.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:43.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.405 issued rwts: total=510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.405 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:43.405 filename0: (groupid=0, jobs=1): err= 0: pid=302272: Sat Apr 27 02:47:15 2024 00:27:43.405 read: IOPS=93, BW=11.7MiB/s (12.3MB/s)(58.5MiB/5004msec) 00:27:43.405 slat (nsec): min=5353, max=30323, avg=8155.84, stdev=1553.27 00:27:43.405 clat (usec): min=7661, max=98463, avg=32062.14, stdev=24324.94 00:27:43.405 lat (usec): min=7669, max=98471, avg=32070.29, stdev=24324.99 00:27:43.405 clat percentiles (usec): 00:27:43.405 | 1.00th=[ 8225], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10945], 00:27:43.405 | 30.00th=[11863], 40.00th=[13173], 50.00th=[14484], 60.00th=[52167], 00:27:43.405 | 70.00th=[53740], 80.00th=[54789], 90.00th=[56361], 95.00th=[58459], 00:27:43.405 | 99.00th=[98042], 99.50th=[98042], 99.90th=[98042], 99.95th=[98042], 00:27:43.405 | 99.99th=[98042] 00:27:43.405 bw ( KiB/s): min= 6925, max=15616, per=28.72%, avg=11905.30, stdev=2773.69, samples=10 00:27:43.405 iops : min= 54, max= 122, avg=93.00, stdev=21.69, samples=10 00:27:43.405 lat (msec) : 10=10.26%, 20=46.15%, 100=43.59% 00:27:43.405 cpu : usr=96.42%, sys=3.26%, ctx=12, majf=0, minf=108 00:27:43.405 IO depths : 1=3.2%, 2=96.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:43.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.405 issued rwts: total=468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.405 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:43.405 00:27:43.405 Run status group 0 (all jobs): 00:27:43.405 READ: bw=40.5MiB/s (42.4MB/s), 11.7MiB/s-16.2MiB/s (12.3MB/s-17.0MB/s), io=203MiB (213MB), run=5004-5021msec 00:27:43.405 02:47:15 -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:43.405 02:47:15 -- target/dif.sh@43 -- # local sub 00:27:43.405 02:47:15 -- target/dif.sh@45 -- # for sub in "$@" 00:27:43.405 02:47:15 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:43.405 02:47:15 -- target/dif.sh@36 -- # local sub_id=0 00:27:43.405 02:47:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:43.405 02:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 
02:47:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:43.405 02:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 02:47:15 -- target/dif.sh@109 -- # NULL_DIF=2 00:27:43.405 02:47:15 -- target/dif.sh@109 -- # bs=4k 00:27:43.405 02:47:15 -- target/dif.sh@109 -- # numjobs=8 00:27:43.405 02:47:15 -- target/dif.sh@109 -- # iodepth=16 00:27:43.405 02:47:15 -- target/dif.sh@109 -- # runtime= 00:27:43.405 02:47:15 -- target/dif.sh@109 -- # files=2 00:27:43.405 02:47:15 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:43.405 02:47:15 -- target/dif.sh@28 -- # local sub 00:27:43.405 02:47:15 -- target/dif.sh@30 -- # for sub in "$@" 00:27:43.405 02:47:15 -- target/dif.sh@31 -- # create_subsystem 0 00:27:43.405 02:47:15 -- target/dif.sh@18 -- # local sub_id=0 00:27:43.405 02:47:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:43.405 02:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 bdev_null0 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 02:47:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:43.405 02:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 02:47:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:43.405 02:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 02:47:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:43.405 02:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 [2024-04-27 02:47:15.927172] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 02:47:15 -- target/dif.sh@30 -- # for sub in "$@" 00:27:43.405 02:47:15 -- target/dif.sh@31 -- # create_subsystem 1 00:27:43.405 02:47:15 -- target/dif.sh@18 -- # local sub_id=1 00:27:43.405 02:47:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:43.405 02:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 bdev_null1 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 02:47:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:43.405 02:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 02:47:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:43.405 02:47:15 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 02:47:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.405 02:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 02:47:15 -- target/dif.sh@30 -- # for sub in "$@" 00:27:43.405 02:47:15 -- target/dif.sh@31 -- # create_subsystem 2 00:27:43.405 02:47:15 -- target/dif.sh@18 -- # local sub_id=2 00:27:43.405 02:47:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:43.405 02:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 bdev_null2 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 02:47:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:43.405 02:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 02:47:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:43.405 02:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 02:47:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:43.405 02:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.405 02:47:15 -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 02:47:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.405 02:47:15 -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:43.405 02:47:15 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:43.405 02:47:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:43.405 02:47:15 -- nvmf/common.sh@521 -- # config=() 00:27:43.405 02:47:15 -- nvmf/common.sh@521 -- # local subsystem config 00:27:43.405 02:47:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:43.405 02:47:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:43.405 { 00:27:43.405 "params": { 00:27:43.405 "name": "Nvme$subsystem", 00:27:43.405 "trtype": "$TEST_TRANSPORT", 00:27:43.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.405 "adrfam": "ipv4", 00:27:43.405 "trsvcid": "$NVMF_PORT", 00:27:43.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.405 "hdgst": ${hdgst:-false}, 00:27:43.405 "ddgst": ${ddgst:-false} 00:27:43.405 }, 00:27:43.405 "method": "bdev_nvme_attach_controller" 00:27:43.405 } 00:27:43.405 EOF 00:27:43.405 )") 00:27:43.405 02:47:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.405 02:47:15 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.405 02:47:15 -- common/autotest_common.sh@1323 -- # 
local fio_dir=/usr/src/fio 00:27:43.405 02:47:15 -- target/dif.sh@82 -- # gen_fio_conf 00:27:43.405 02:47:15 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:43.405 02:47:15 -- target/dif.sh@54 -- # local file 00:27:43.405 02:47:15 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:43.405 02:47:15 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:43.405 02:47:15 -- target/dif.sh@56 -- # cat 00:27:43.405 02:47:15 -- common/autotest_common.sh@1327 -- # shift 00:27:43.405 02:47:15 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:43.405 02:47:15 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:43.405 02:47:15 -- nvmf/common.sh@543 -- # cat 00:27:43.405 02:47:15 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:43.405 02:47:15 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:43.405 02:47:15 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:43.405 02:47:15 -- target/dif.sh@72 -- # (( file <= files )) 00:27:43.405 02:47:15 -- target/dif.sh@73 -- # cat 00:27:43.405 02:47:15 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:43.405 02:47:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:43.405 02:47:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:43.405 { 00:27:43.405 "params": { 00:27:43.405 "name": "Nvme$subsystem", 00:27:43.405 "trtype": "$TEST_TRANSPORT", 00:27:43.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.405 "adrfam": "ipv4", 00:27:43.405 "trsvcid": "$NVMF_PORT", 00:27:43.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.405 "hdgst": ${hdgst:-false}, 00:27:43.405 "ddgst": ${ddgst:-false} 00:27:43.405 }, 00:27:43.405 "method": "bdev_nvme_attach_controller" 00:27:43.405 } 00:27:43.405 EOF 00:27:43.405 )") 00:27:43.405 02:47:16 -- target/dif.sh@72 -- # (( file++ )) 00:27:43.405 02:47:16 -- target/dif.sh@72 -- # (( file <= files )) 00:27:43.405 02:47:16 -- target/dif.sh@73 -- # cat 00:27:43.405 02:47:16 -- nvmf/common.sh@543 -- # cat 00:27:43.405 02:47:16 -- target/dif.sh@72 -- # (( file++ )) 00:27:43.405 02:47:16 -- target/dif.sh@72 -- # (( file <= files )) 00:27:43.405 02:47:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:43.405 02:47:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:43.405 { 00:27:43.405 "params": { 00:27:43.405 "name": "Nvme$subsystem", 00:27:43.405 "trtype": "$TEST_TRANSPORT", 00:27:43.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.405 "adrfam": "ipv4", 00:27:43.405 "trsvcid": "$NVMF_PORT", 00:27:43.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.406 "hdgst": ${hdgst:-false}, 00:27:43.406 "ddgst": ${ddgst:-false} 00:27:43.406 }, 00:27:43.406 "method": "bdev_nvme_attach_controller" 00:27:43.406 } 00:27:43.406 EOF 00:27:43.406 )") 00:27:43.406 02:47:16 -- nvmf/common.sh@543 -- # cat 00:27:43.406 02:47:16 -- nvmf/common.sh@545 -- # jq . 
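For reference, the fio invocation traced above (the spdk_bdev plugin loaded via LD_PRELOAD, with --spdk_json_conf fed through /dev/fd) can be run by hand. A minimal sketch, assuming the bdev_nvme_attach_controller fragments printed below are wrapped in the standard SPDK "subsystems"/"bdev" JSON envelope and saved to the hypothetical files bdev.json and dif.fio; only the Nvme0 controller is shown, Nvme1 and Nvme2 follow the same pattern, and a sketch of dif.fio itself appears after the job banner lines below:

# bdev.json (hypothetical name): envelope assumed, params copied from the JSON printed below
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Run fio with the spdk_bdev external ioengine, mirroring the traced command line
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./dif.fio
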
00:27:43.406 02:47:16 -- nvmf/common.sh@546 -- # IFS=, 00:27:43.406 02:47:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:43.406 "params": { 00:27:43.406 "name": "Nvme0", 00:27:43.406 "trtype": "tcp", 00:27:43.406 "traddr": "10.0.0.2", 00:27:43.406 "adrfam": "ipv4", 00:27:43.406 "trsvcid": "4420", 00:27:43.406 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:43.406 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:43.406 "hdgst": false, 00:27:43.406 "ddgst": false 00:27:43.406 }, 00:27:43.406 "method": "bdev_nvme_attach_controller" 00:27:43.406 },{ 00:27:43.406 "params": { 00:27:43.406 "name": "Nvme1", 00:27:43.406 "trtype": "tcp", 00:27:43.406 "traddr": "10.0.0.2", 00:27:43.406 "adrfam": "ipv4", 00:27:43.406 "trsvcid": "4420", 00:27:43.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:43.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:43.406 "hdgst": false, 00:27:43.406 "ddgst": false 00:27:43.406 }, 00:27:43.406 "method": "bdev_nvme_attach_controller" 00:27:43.406 },{ 00:27:43.406 "params": { 00:27:43.406 "name": "Nvme2", 00:27:43.406 "trtype": "tcp", 00:27:43.406 "traddr": "10.0.0.2", 00:27:43.406 "adrfam": "ipv4", 00:27:43.406 "trsvcid": "4420", 00:27:43.406 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:43.406 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:43.406 "hdgst": false, 00:27:43.406 "ddgst": false 00:27:43.406 }, 00:27:43.406 "method": "bdev_nvme_attach_controller" 00:27:43.406 }' 00:27:43.406 02:47:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:43.406 02:47:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:43.406 02:47:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:43.406 02:47:16 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:43.406 02:47:16 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:43.406 02:47:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:43.406 02:47:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:43.406 02:47:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:43.406 02:47:16 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:43.406 02:47:16 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.406 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:43.406 ... 00:27:43.406 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:43.406 ... 00:27:43.406 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:43.406 ... 
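The exact job file emitted by gen_fio_conf is not captured in this log. A hypothetical reconstruction, inferred from the traced parameters (NULL_DIF=2, bs=4k, numjobs=8, iodepth=16, files=2) and the per-job banner lines above, written as the here-doc that the invocation sketch above refers to; the NvmeXn1 bdev names and the 10-second runtime are assumptions, the latter inferred only from run=10001-10023msec in the summary below:

cat > dif.fio <<'EOF'
[global]
# jobs run as threads with the spdk_bdev engine ("Starting 24 threads" below)
thread=1
rw=randread
bs=4k
iodepth=16
numjobs=8
time_based=1
# assumed; only the ~10 s elapsed time is visible in the log
runtime=10

[filename0]
# namespace bdev assumed to be exposed by the Nvme0 controller attached above
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF
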
00:27:43.406 fio-3.35 00:27:43.406 Starting 24 threads 00:27:43.406 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.642 00:27:55.642 filename0: (groupid=0, jobs=1): err= 0: pid=303625: Sat Apr 27 02:47:27 2024 00:27:55.642 read: IOPS=565, BW=2260KiB/s (2314kB/s)(22.1MiB/10019msec) 00:27:55.642 slat (usec): min=2, max=185, avg= 7.84, stdev= 5.65 00:27:55.642 clat (usec): min=4028, max=62063, avg=28253.74, stdev=7374.76 00:27:55.642 lat (usec): min=4034, max=62073, avg=28261.58, stdev=7375.08 00:27:55.642 clat percentiles (usec): 00:27:55.642 | 1.00th=[ 6783], 5.00th=[13435], 10.00th=[18744], 20.00th=[21890], 00:27:55.642 | 30.00th=[26346], 40.00th=[30278], 50.00th=[31327], 60.00th=[31851], 00:27:55.642 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:27:55.642 | 99.00th=[52167], 99.50th=[54789], 99.90th=[58459], 99.95th=[59507], 00:27:55.642 | 99.99th=[62129] 00:27:55.642 bw ( KiB/s): min= 2048, max= 2720, per=4.83%, avg=2258.00, stdev=168.33, samples=20 00:27:55.642 iops : min= 512, max= 680, avg=564.50, stdev=42.08, samples=20 00:27:55.642 lat (msec) : 10=3.46%, 20=10.99%, 50=84.47%, 100=1.08% 00:27:55.642 cpu : usr=98.36%, sys=0.91%, ctx=63, majf=0, minf=42 00:27:55.642 IO depths : 1=4.0%, 2=8.2%, 4=19.1%, 8=59.7%, 16=9.1%, 32=0.0%, >=64=0.0% 00:27:55.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.642 complete : 0=0.0%, 4=92.7%, 8=2.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.642 issued rwts: total=5661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.642 filename0: (groupid=0, jobs=1): err= 0: pid=303626: Sat Apr 27 02:47:27 2024 00:27:55.642 read: IOPS=486, BW=1946KiB/s (1993kB/s)(19.1MiB/10023msec) 00:27:55.642 slat (usec): min=4, max=124, avg=17.87, stdev=15.66 00:27:55.642 clat (usec): min=4562, max=57172, avg=32752.81, stdev=6202.04 00:27:55.642 lat (usec): min=4571, max=57219, avg=32770.68, stdev=6203.29 00:27:55.642 clat percentiles (usec): 00:27:55.642 | 1.00th=[ 6980], 5.00th=[23725], 10.00th=[28967], 20.00th=[31327], 00:27:55.642 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32637], 60.00th=[32900], 00:27:55.642 | 70.00th=[33162], 80.00th=[33817], 90.00th=[39584], 95.00th=[42730], 00:27:55.642 | 99.00th=[54264], 99.50th=[55837], 99.90th=[56886], 99.95th=[56886], 00:27:55.642 | 99.99th=[57410] 00:27:55.642 bw ( KiB/s): min= 1792, max= 2432, per=4.16%, avg=1944.40, stdev=135.75, samples=20 00:27:55.642 iops : min= 448, max= 608, avg=486.10, stdev=33.94, samples=20 00:27:55.642 lat (msec) : 10=1.56%, 20=1.19%, 50=95.16%, 100=2.09% 00:27:55.642 cpu : usr=98.96%, sys=0.71%, ctx=13, majf=0, minf=20 00:27:55.642 IO depths : 1=1.9%, 2=3.9%, 4=11.7%, 8=70.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:27:55.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.642 complete : 0=0.0%, 4=91.0%, 8=4.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.642 issued rwts: total=4877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.642 filename0: (groupid=0, jobs=1): err= 0: pid=303627: Sat Apr 27 02:47:27 2024 00:27:55.642 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10023msec) 00:27:55.642 slat (usec): min=5, max=136, avg=22.24, stdev=20.65 00:27:55.642 clat (usec): min=14870, max=53543, avg=31759.06, stdev=4330.86 00:27:55.642 lat (usec): min=14878, max=53550, avg=31781.30, stdev=4332.45 00:27:55.642 clat percentiles (usec): 00:27:55.642 | 1.00th=[19530], 
5.00th=[22152], 10.00th=[27657], 20.00th=[31065], 00:27:55.642 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:27:55.642 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35390], 00:27:55.642 | 99.00th=[47449], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:27:55.642 | 99.99th=[53740] 00:27:55.642 bw ( KiB/s): min= 1792, max= 2272, per=4.28%, avg=2001.60, stdev=112.41, samples=20 00:27:55.642 iops : min= 448, max= 568, avg=500.40, stdev=28.10, samples=20 00:27:55.642 lat (msec) : 20=1.69%, 50=97.47%, 100=0.84% 00:27:55.642 cpu : usr=98.88%, sys=0.66%, ctx=39, majf=0, minf=25 00:27:55.642 IO depths : 1=4.3%, 2=9.2%, 4=20.9%, 8=57.1%, 16=8.5%, 32=0.0%, >=64=0.0% 00:27:55.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.642 complete : 0=0.0%, 4=93.1%, 8=1.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.642 issued rwts: total=5020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.642 filename0: (groupid=0, jobs=1): err= 0: pid=303628: Sat Apr 27 02:47:27 2024 00:27:55.642 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10020msec) 00:27:55.642 slat (usec): min=5, max=212, avg=11.44, stdev= 9.63 00:27:55.642 clat (usec): min=5335, max=55694, avg=33206.51, stdev=6242.35 00:27:55.642 lat (usec): min=5351, max=55713, avg=33217.95, stdev=6242.72 00:27:55.642 clat percentiles (usec): 00:27:55.642 | 1.00th=[17171], 5.00th=[22938], 10.00th=[26870], 20.00th=[31065], 00:27:55.642 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32900], 00:27:55.642 | 70.00th=[33424], 80.00th=[34866], 90.00th=[42206], 95.00th=[45351], 00:27:55.642 | 99.00th=[52167], 99.50th=[53216], 99.90th=[54264], 99.95th=[55837], 00:27:55.642 | 99.99th=[55837] 00:27:55.643 bw ( KiB/s): min= 1768, max= 2000, per=4.11%, avg=1922.00, stdev=52.80, samples=20 00:27:55.643 iops : min= 442, max= 500, avg=480.50, stdev=13.20, samples=20 00:27:55.643 lat (msec) : 10=0.33%, 20=2.16%, 50=96.24%, 100=1.27% 00:27:55.643 cpu : usr=95.82%, sys=2.29%, ctx=109, majf=0, minf=36 00:27:55.643 IO depths : 1=0.9%, 2=1.9%, 4=10.1%, 8=74.0%, 16=13.1%, 32=0.0%, >=64=0.0% 00:27:55.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 complete : 0=0.0%, 4=90.5%, 8=5.4%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 issued rwts: total=4815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.643 filename0: (groupid=0, jobs=1): err= 0: pid=303629: Sat Apr 27 02:47:27 2024 00:27:55.643 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10003msec) 00:27:55.643 slat (usec): min=5, max=151, avg=20.79, stdev=19.20 00:27:55.643 clat (usec): min=10370, max=57068, avg=32193.27, stdev=2365.23 00:27:55.643 lat (usec): min=10376, max=57084, avg=32214.06, stdev=2365.77 00:27:55.643 clat percentiles (usec): 00:27:55.643 | 1.00th=[27919], 5.00th=[30540], 10.00th=[31065], 20.00th=[31589], 00:27:55.643 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:27:55.643 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:27:55.643 | 99.00th=[34866], 99.50th=[35914], 99.90th=[56886], 99.95th=[56886], 00:27:55.643 | 99.99th=[56886] 00:27:55.643 bw ( KiB/s): min= 1795, max= 2048, per=4.21%, avg=1967.32, stdev=76.07, samples=19 00:27:55.643 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:27:55.643 lat (msec) : 20=0.65%, 50=99.03%, 100=0.32% 00:27:55.643 cpu : usr=98.42%, 
sys=0.90%, ctx=29, majf=0, minf=33 00:27:55.643 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:55.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.643 filename0: (groupid=0, jobs=1): err= 0: pid=303630: Sat Apr 27 02:47:27 2024 00:27:55.643 read: IOPS=498, BW=1993KiB/s (2041kB/s)(19.5MiB/10015msec) 00:27:55.643 slat (nsec): min=5521, max=93947, avg=11898.09, stdev=8855.11 00:27:55.643 clat (usec): min=14068, max=58483, avg=32008.47, stdev=3244.71 00:27:55.643 lat (usec): min=14081, max=58489, avg=32020.37, stdev=3245.33 00:27:55.643 clat percentiles (usec): 00:27:55.643 | 1.00th=[17957], 5.00th=[29230], 10.00th=[30540], 20.00th=[31327], 00:27:55.643 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:27:55.643 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:27:55.643 | 99.00th=[40633], 99.50th=[46400], 99.90th=[53216], 99.95th=[57410], 00:27:55.643 | 99.99th=[58459] 00:27:55.643 bw ( KiB/s): min= 1792, max= 2064, per=4.25%, avg=1989.60, stdev=75.81, samples=20 00:27:55.643 iops : min= 448, max= 516, avg=497.40, stdev=18.95, samples=20 00:27:55.643 lat (msec) : 20=1.70%, 50=97.86%, 100=0.44% 00:27:55.643 cpu : usr=99.34%, sys=0.36%, ctx=16, majf=0, minf=29 00:27:55.643 IO depths : 1=4.1%, 2=10.0%, 4=24.4%, 8=53.0%, 16=8.5%, 32=0.0%, >=64=0.0% 00:27:55.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 issued rwts: total=4990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.643 filename0: (groupid=0, jobs=1): err= 0: pid=303631: Sat Apr 27 02:47:27 2024 00:27:55.643 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10020msec) 00:27:55.643 slat (usec): min=5, max=110, avg=19.04, stdev=15.45 00:27:55.643 clat (usec): min=17717, max=57387, avg=32684.97, stdev=4136.93 00:27:55.643 lat (usec): min=17723, max=57393, avg=32704.01, stdev=4136.81 00:27:55.643 clat percentiles (usec): 00:27:55.643 | 1.00th=[20317], 5.00th=[26346], 10.00th=[30278], 20.00th=[31327], 00:27:55.643 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:27:55.643 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34866], 95.00th=[41681], 00:27:55.643 | 99.00th=[47973], 99.50th=[52691], 99.90th=[55837], 99.95th=[55837], 00:27:55.643 | 99.99th=[57410] 00:27:55.643 bw ( KiB/s): min= 1792, max= 2112, per=4.16%, avg=1946.40, stdev=77.39, samples=20 00:27:55.643 iops : min= 448, max= 528, avg=486.60, stdev=19.35, samples=20 00:27:55.643 lat (msec) : 20=0.88%, 50=98.38%, 100=0.74% 00:27:55.643 cpu : usr=98.91%, sys=0.77%, ctx=24, majf=0, minf=35 00:27:55.643 IO depths : 1=3.9%, 2=7.8%, 4=18.3%, 8=60.8%, 16=9.2%, 32=0.0%, >=64=0.0% 00:27:55.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 complete : 0=0.0%, 4=92.5%, 8=2.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 issued rwts: total=4882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.643 filename0: (groupid=0, jobs=1): err= 0: pid=303632: Sat Apr 27 02:47:27 2024 00:27:55.643 read: IOPS=494, BW=1977KiB/s 
(2024kB/s)(19.3MiB/10004msec) 00:27:55.643 slat (usec): min=5, max=114, avg=22.42, stdev=19.91 00:27:55.643 clat (usec): min=17034, max=47185, avg=32162.26, stdev=1290.42 00:27:55.643 lat (usec): min=17041, max=47192, avg=32184.68, stdev=1292.63 00:27:55.643 clat percentiles (usec): 00:27:55.643 | 1.00th=[28967], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:27:55.643 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:27:55.643 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:27:55.643 | 99.00th=[34866], 99.50th=[35914], 99.90th=[38536], 99.95th=[43779], 00:27:55.643 | 99.99th=[46924] 00:27:55.643 bw ( KiB/s): min= 1920, max= 2048, per=4.22%, avg=1973.89, stdev=62.02, samples=19 00:27:55.643 iops : min= 480, max= 512, avg=493.47, stdev=15.50, samples=19 00:27:55.643 lat (msec) : 20=0.08%, 50=99.92% 00:27:55.643 cpu : usr=98.37%, sys=0.81%, ctx=38, majf=0, minf=28 00:27:55.643 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:27:55.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.643 filename1: (groupid=0, jobs=1): err= 0: pid=303633: Sat Apr 27 02:47:27 2024 00:27:55.643 read: IOPS=462, BW=1849KiB/s (1893kB/s)(18.1MiB/10023msec) 00:27:55.643 slat (usec): min=5, max=246, avg=15.96, stdev=15.58 00:27:55.643 clat (usec): min=13033, max=86241, avg=34499.53, stdev=6474.09 00:27:55.643 lat (usec): min=13044, max=86283, avg=34515.50, stdev=6473.93 00:27:55.643 clat percentiles (usec): 00:27:55.643 | 1.00th=[20055], 5.00th=[26084], 10.00th=[30278], 20.00th=[31589], 00:27:55.643 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32900], 60.00th=[33162], 00:27:55.643 | 70.00th=[33817], 80.00th=[38536], 90.00th=[43254], 95.00th=[47973], 00:27:55.643 | 99.00th=[54264], 99.50th=[61080], 99.90th=[65799], 99.95th=[65799], 00:27:55.643 | 99.99th=[86508] 00:27:55.643 bw ( KiB/s): min= 1712, max= 2032, per=3.95%, avg=1846.40, stdev=79.14, samples=20 00:27:55.643 iops : min= 428, max= 508, avg=461.60, stdev=19.78, samples=20 00:27:55.643 lat (msec) : 20=0.95%, 50=96.14%, 100=2.91% 00:27:55.643 cpu : usr=97.84%, sys=1.36%, ctx=35, majf=0, minf=32 00:27:55.643 IO depths : 1=1.4%, 2=3.0%, 4=11.2%, 8=71.6%, 16=12.9%, 32=0.0%, >=64=0.0% 00:27:55.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 complete : 0=0.0%, 4=90.9%, 8=5.1%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 issued rwts: total=4632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.643 filename1: (groupid=0, jobs=1): err= 0: pid=303634: Sat Apr 27 02:47:27 2024 00:27:55.643 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10015msec) 00:27:55.643 slat (usec): min=5, max=116, avg=15.08, stdev=15.22 00:27:55.643 clat (usec): min=11002, max=63407, avg=32566.43, stdev=5539.57 00:27:55.643 lat (usec): min=11008, max=63434, avg=32581.52, stdev=5540.35 00:27:55.643 clat percentiles (usec): 00:27:55.643 | 1.00th=[18744], 5.00th=[22676], 10.00th=[27132], 20.00th=[31327], 00:27:55.643 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:27:55.643 | 70.00th=[33162], 80.00th=[33424], 90.00th=[36963], 95.00th=[41681], 00:27:55.643 | 99.00th=[53216], 99.50th=[57934], 99.90th=[63177], 99.95th=[63177], 
00:27:55.643 | 99.99th=[63177] 00:27:55.643 bw ( KiB/s): min= 1664, max= 2224, per=4.19%, avg=1958.40, stdev=115.77, samples=20 00:27:55.643 iops : min= 416, max= 556, avg=489.60, stdev=28.94, samples=20 00:27:55.643 lat (msec) : 20=1.82%, 50=96.31%, 100=1.88% 00:27:55.643 cpu : usr=99.08%, sys=0.54%, ctx=112, majf=0, minf=27 00:27:55.643 IO depths : 1=1.4%, 2=4.7%, 4=15.7%, 8=66.0%, 16=12.3%, 32=0.0%, >=64=0.0% 00:27:55.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 complete : 0=0.0%, 4=92.1%, 8=3.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 issued rwts: total=4902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.643 filename1: (groupid=0, jobs=1): err= 0: pid=303635: Sat Apr 27 02:47:27 2024 00:27:55.643 read: IOPS=464, BW=1859KiB/s (1904kB/s)(18.2MiB/10020msec) 00:27:55.643 slat (usec): min=5, max=108, avg=15.59, stdev=13.56 00:27:55.643 clat (usec): min=8623, max=64830, avg=34313.54, stdev=6733.71 00:27:55.643 lat (usec): min=8633, max=64838, avg=34329.12, stdev=6733.30 00:27:55.643 clat percentiles (usec): 00:27:55.643 | 1.00th=[19530], 5.00th=[25297], 10.00th=[30278], 20.00th=[31327], 00:27:55.643 | 30.00th=[31851], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:27:55.643 | 70.00th=[33424], 80.00th=[36963], 90.00th=[44303], 95.00th=[49021], 00:27:55.643 | 99.00th=[57410], 99.50th=[57410], 99.90th=[63177], 99.95th=[63177], 00:27:55.643 | 99.99th=[64750] 00:27:55.643 bw ( KiB/s): min= 1664, max= 1976, per=3.97%, avg=1856.80, stdev=80.79, samples=20 00:27:55.643 iops : min= 416, max= 494, avg=464.20, stdev=20.20, samples=20 00:27:55.643 lat (msec) : 10=0.04%, 20=1.22%, 50=94.03%, 100=4.70% 00:27:55.643 cpu : usr=98.85%, sys=0.78%, ctx=172, majf=0, minf=30 00:27:55.643 IO depths : 1=0.5%, 2=0.9%, 4=7.1%, 8=76.4%, 16=15.1%, 32=0.0%, >=64=0.0% 00:27:55.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 complete : 0=0.0%, 4=90.2%, 8=7.0%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.643 issued rwts: total=4658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.643 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.643 filename1: (groupid=0, jobs=1): err= 0: pid=303636: Sat Apr 27 02:47:27 2024 00:27:55.643 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10009msec) 00:27:55.643 slat (nsec): min=5523, max=71316, avg=11725.27, stdev=7751.12 00:27:55.643 clat (usec): min=9082, max=52879, avg=31981.91, stdev=3109.23 00:27:55.643 lat (usec): min=9089, max=52898, avg=31993.63, stdev=3109.90 00:27:55.643 clat percentiles (usec): 00:27:55.644 | 1.00th=[19006], 5.00th=[29754], 10.00th=[30540], 20.00th=[31327], 00:27:55.644 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:27:55.644 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:27:55.644 | 99.00th=[40633], 99.50th=[45351], 99.90th=[52691], 99.95th=[52691], 00:27:55.644 | 99.99th=[52691] 00:27:55.644 bw ( KiB/s): min= 1792, max= 2176, per=4.26%, avg=1990.40, stdev=106.82, samples=20 00:27:55.644 iops : min= 448, max= 544, avg=497.60, stdev=26.70, samples=20 00:27:55.644 lat (msec) : 10=0.32%, 20=1.28%, 50=98.08%, 100=0.32% 00:27:55.644 cpu : usr=99.25%, sys=0.47%, ctx=10, majf=0, minf=32 00:27:55.644 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:55.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:27:55.644 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.644 filename1: (groupid=0, jobs=1): err= 0: pid=303637: Sat Apr 27 02:47:27 2024 00:27:55.644 read: IOPS=491, BW=1968KiB/s (2015kB/s)(19.2MiB/10001msec) 00:27:55.644 slat (usec): min=5, max=142, avg=25.50, stdev=20.26 00:27:55.644 clat (usec): min=12930, max=56600, avg=32293.84, stdev=2985.06 00:27:55.644 lat (usec): min=12938, max=56618, avg=32319.34, stdev=2985.86 00:27:55.644 clat percentiles (usec): 00:27:55.644 | 1.00th=[22676], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:27:55.644 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:27:55.644 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:27:55.644 | 99.00th=[49546], 99.50th=[52167], 99.90th=[55313], 99.95th=[55837], 00:27:55.644 | 99.99th=[56361] 00:27:55.644 bw ( KiB/s): min= 1795, max= 2048, per=4.19%, avg=1961.42, stdev=68.37, samples=19 00:27:55.644 iops : min= 448, max= 512, avg=490.32, stdev=17.20, samples=19 00:27:55.644 lat (msec) : 20=0.59%, 50=98.68%, 100=0.73% 00:27:55.644 cpu : usr=98.25%, sys=0.94%, ctx=23, majf=0, minf=33 00:27:55.644 IO depths : 1=4.1%, 2=10.1%, 4=24.4%, 8=53.0%, 16=8.4%, 32=0.0%, >=64=0.0% 00:27:55.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 issued rwts: total=4920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.644 filename1: (groupid=0, jobs=1): err= 0: pid=303638: Sat Apr 27 02:47:27 2024 00:27:55.644 read: IOPS=461, BW=1846KiB/s (1891kB/s)(18.0MiB/10009msec) 00:27:55.644 slat (usec): min=5, max=109, avg=19.15, stdev=15.78 00:27:55.644 clat (usec): min=10261, max=64337, avg=34520.43, stdev=6201.03 00:27:55.644 lat (usec): min=10272, max=64343, avg=34539.58, stdev=6199.06 00:27:55.644 clat percentiles (usec): 00:27:55.644 | 1.00th=[18744], 5.00th=[28181], 10.00th=[30540], 20.00th=[31589], 00:27:55.644 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32900], 60.00th=[33162], 00:27:55.644 | 70.00th=[33817], 80.00th=[39060], 90.00th=[43254], 95.00th=[46924], 00:27:55.644 | 99.00th=[54264], 99.50th=[56886], 99.90th=[62129], 99.95th=[64226], 00:27:55.644 | 99.99th=[64226] 00:27:55.644 bw ( KiB/s): min= 1640, max= 1968, per=3.95%, avg=1846.89, stdev=85.66, samples=19 00:27:55.644 iops : min= 410, max= 492, avg=461.68, stdev=21.38, samples=19 00:27:55.644 lat (msec) : 20=1.36%, 50=96.19%, 100=2.45% 00:27:55.644 cpu : usr=98.99%, sys=0.63%, ctx=47, majf=0, minf=24 00:27:55.644 IO depths : 1=1.7%, 2=3.5%, 4=12.5%, 8=70.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:27:55.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 complete : 0=0.0%, 4=91.2%, 8=4.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 issued rwts: total=4620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.644 filename1: (groupid=0, jobs=1): err= 0: pid=303639: Sat Apr 27 02:47:27 2024 00:27:55.644 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10001msec) 00:27:55.644 slat (usec): min=5, max=127, avg=27.47, stdev=20.69 00:27:55.644 clat (usec): min=5394, max=55625, avg=32100.75, stdev=2295.75 00:27:55.644 lat (usec): min=5399, max=55645, avg=32128.22, stdev=2297.65 00:27:55.644 clat percentiles (usec): 00:27:55.644 | 
1.00th=[29754], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:27:55.644 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:27:55.644 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:27:55.644 | 99.00th=[34341], 99.50th=[34866], 99.90th=[55837], 99.95th=[55837], 00:27:55.644 | 99.99th=[55837] 00:27:55.644 bw ( KiB/s): min= 1795, max= 2048, per=4.21%, avg=1967.32, stdev=76.07, samples=19 00:27:55.644 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:27:55.644 lat (msec) : 10=0.04%, 20=0.61%, 50=99.03%, 100=0.32% 00:27:55.644 cpu : usr=99.18%, sys=0.46%, ctx=65, majf=0, minf=29 00:27:55.644 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:55.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.644 filename1: (groupid=0, jobs=1): err= 0: pid=303640: Sat Apr 27 02:47:27 2024 00:27:55.644 read: IOPS=499, BW=1996KiB/s (2044kB/s)(19.5MiB/10004msec) 00:27:55.644 slat (usec): min=5, max=122, avg=20.40, stdev=17.88 00:27:55.644 clat (usec): min=7558, max=57975, avg=31886.10, stdev=4911.59 00:27:55.644 lat (usec): min=7571, max=57991, avg=31906.51, stdev=4913.77 00:27:55.644 clat percentiles (usec): 00:27:55.644 | 1.00th=[17695], 5.00th=[22414], 10.00th=[26346], 20.00th=[30802], 00:27:55.644 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32375], 60.00th=[32637], 00:27:55.644 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[40109], 00:27:55.644 | 99.00th=[50594], 99.50th=[52691], 99.90th=[55837], 99.95th=[57934], 00:27:55.644 | 99.99th=[57934] 00:27:55.644 bw ( KiB/s): min= 1776, max= 2224, per=4.27%, avg=1994.11, stdev=102.62, samples=19 00:27:55.644 iops : min= 444, max= 556, avg=498.53, stdev=25.65, samples=19 00:27:55.644 lat (msec) : 10=0.04%, 20=2.62%, 50=96.21%, 100=1.12% 00:27:55.644 cpu : usr=99.05%, sys=0.54%, ctx=122, majf=0, minf=20 00:27:55.644 IO depths : 1=3.3%, 2=7.8%, 4=19.4%, 8=59.8%, 16=9.7%, 32=0.0%, >=64=0.0% 00:27:55.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 complete : 0=0.0%, 4=92.8%, 8=1.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.644 filename2: (groupid=0, jobs=1): err= 0: pid=303641: Sat Apr 27 02:47:27 2024 00:27:55.644 read: IOPS=464, BW=1858KiB/s (1903kB/s)(18.2MiB/10020msec) 00:27:55.644 slat (usec): min=5, max=125, avg=22.05, stdev=19.57 00:27:55.644 clat (usec): min=9299, max=64239, avg=34278.80, stdev=5690.95 00:27:55.644 lat (usec): min=9320, max=64245, avg=34300.84, stdev=5688.41 00:27:55.644 clat percentiles (usec): 00:27:55.644 | 1.00th=[21627], 5.00th=[29230], 10.00th=[30802], 20.00th=[31589], 00:27:55.644 | 30.00th=[31851], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:27:55.644 | 70.00th=[33424], 80.00th=[36439], 90.00th=[42730], 95.00th=[46924], 00:27:55.644 | 99.00th=[54789], 99.50th=[55313], 99.90th=[64226], 99.95th=[64226], 00:27:55.644 | 99.99th=[64226] 00:27:55.644 bw ( KiB/s): min= 1664, max= 2048, per=3.97%, avg=1855.60, stdev=98.99, samples=20 00:27:55.644 iops : min= 416, max= 512, avg=463.90, stdev=24.75, samples=20 00:27:55.644 lat (msec) : 10=0.02%, 20=0.60%, 50=96.58%, 100=2.79% 
00:27:55.644 cpu : usr=99.15%, sys=0.53%, ctx=18, majf=0, minf=25 00:27:55.644 IO depths : 1=2.4%, 2=4.9%, 4=14.0%, 8=66.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:27:55.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 complete : 0=0.0%, 4=91.7%, 8=4.3%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 issued rwts: total=4655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.644 filename2: (groupid=0, jobs=1): err= 0: pid=303642: Sat Apr 27 02:47:27 2024 00:27:55.644 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.7MiB/10010msec) 00:27:55.644 slat (usec): min=5, max=121, avg=22.54, stdev=19.90 00:27:55.644 clat (usec): min=10051, max=61640, avg=33245.65, stdev=5415.47 00:27:55.644 lat (usec): min=10058, max=61649, avg=33268.19, stdev=5414.59 00:27:55.644 clat percentiles (usec): 00:27:55.644 | 1.00th=[17695], 5.00th=[26346], 10.00th=[30278], 20.00th=[31327], 00:27:55.644 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:27:55.644 | 70.00th=[33162], 80.00th=[33817], 90.00th=[40633], 95.00th=[44827], 00:27:55.644 | 99.00th=[51643], 99.50th=[52691], 99.90th=[60556], 99.95th=[60556], 00:27:55.644 | 99.99th=[61604] 00:27:55.644 bw ( KiB/s): min= 1816, max= 2048, per=4.08%, avg=1909.05, stdev=52.88, samples=19 00:27:55.644 iops : min= 454, max= 512, avg=477.26, stdev=13.22, samples=19 00:27:55.644 lat (msec) : 20=1.54%, 50=96.43%, 100=2.02% 00:27:55.644 cpu : usr=97.85%, sys=1.18%, ctx=25, majf=0, minf=26 00:27:55.644 IO depths : 1=2.3%, 2=4.5%, 4=13.2%, 8=67.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:27:55.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 complete : 0=0.0%, 4=91.4%, 8=4.8%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 issued rwts: total=4792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.644 filename2: (groupid=0, jobs=1): err= 0: pid=303643: Sat Apr 27 02:47:27 2024 00:27:55.644 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10010msec) 00:27:55.644 slat (usec): min=5, max=134, avg=22.25, stdev=18.45 00:27:55.644 clat (usec): min=13201, max=61791, avg=32849.78, stdev=3977.80 00:27:55.644 lat (usec): min=13208, max=61824, avg=32872.03, stdev=3977.78 00:27:55.644 clat percentiles (usec): 00:27:55.644 | 1.00th=[22414], 5.00th=[30016], 10.00th=[30802], 20.00th=[31327], 00:27:55.644 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:27:55.644 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[40109], 00:27:55.644 | 99.00th=[51119], 99.50th=[52691], 99.90th=[61604], 99.95th=[61604], 00:27:55.644 | 99.99th=[61604] 00:27:55.644 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1933.20, stdev=75.64, samples=20 00:27:55.644 iops : min= 448, max= 512, avg=483.30, stdev=18.91, samples=20 00:27:55.644 lat (msec) : 20=0.31%, 50=98.27%, 100=1.42% 00:27:55.644 cpu : usr=98.91%, sys=0.69%, ctx=45, majf=0, minf=33 00:27:55.644 IO depths : 1=4.6%, 2=9.3%, 4=20.3%, 8=57.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:27:55.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 complete : 0=0.0%, 4=93.0%, 8=1.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.644 issued rwts: total=4849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.644 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.645 filename2: (groupid=0, jobs=1): err= 0: pid=303644: Sat Apr 27 02:47:27 2024 00:27:55.645 read: IOPS=494, BW=1977KiB/s 
(2025kB/s)(19.3MiB/10001msec) 00:27:55.645 slat (usec): min=5, max=117, avg=27.45, stdev=21.17 00:27:55.645 clat (usec): min=10327, max=55675, avg=32091.93, stdev=2300.42 00:27:55.645 lat (usec): min=10335, max=55714, avg=32119.38, stdev=2302.67 00:27:55.645 clat percentiles (usec): 00:27:55.645 | 1.00th=[27919], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:27:55.645 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:27:55.645 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:27:55.645 | 99.00th=[34341], 99.50th=[34866], 99.90th=[55837], 99.95th=[55837], 00:27:55.645 | 99.99th=[55837] 00:27:55.645 bw ( KiB/s): min= 1795, max= 2048, per=4.21%, avg=1967.32, stdev=76.07, samples=19 00:27:55.645 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:27:55.645 lat (msec) : 20=0.65%, 50=99.03%, 100=0.32% 00:27:55.645 cpu : usr=98.78%, sys=0.73%, ctx=205, majf=0, minf=28 00:27:55.645 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:55.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.645 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.645 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.645 filename2: (groupid=0, jobs=1): err= 0: pid=303645: Sat Apr 27 02:47:27 2024 00:27:55.645 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10004msec) 00:27:55.645 slat (usec): min=5, max=137, avg=18.91, stdev=17.75 00:27:55.645 clat (usec): min=11405, max=59980, avg=34134.42, stdev=5867.40 00:27:55.645 lat (usec): min=11415, max=59986, avg=34153.33, stdev=5865.64 00:27:55.645 clat percentiles (usec): 00:27:55.645 | 1.00th=[18482], 5.00th=[28967], 10.00th=[30540], 20.00th=[31589], 00:27:55.645 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[33162], 00:27:55.645 | 70.00th=[33424], 80.00th=[36439], 90.00th=[42730], 95.00th=[46400], 00:27:55.645 | 99.00th=[52691], 99.50th=[53216], 99.90th=[56361], 99.95th=[60031], 00:27:55.645 | 99.99th=[60031] 00:27:55.645 bw ( KiB/s): min= 1616, max= 2048, per=3.98%, avg=1862.32, stdev=94.94, samples=19 00:27:55.645 iops : min= 404, max= 512, avg=465.58, stdev=23.74, samples=19 00:27:55.645 lat (msec) : 20=1.71%, 50=95.57%, 100=2.72% 00:27:55.645 cpu : usr=98.92%, sys=0.70%, ctx=63, majf=0, minf=28 00:27:55.645 IO depths : 1=2.3%, 2=4.8%, 4=14.0%, 8=66.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:27:55.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.645 complete : 0=0.0%, 4=91.7%, 8=4.4%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.645 issued rwts: total=4670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.645 filename2: (groupid=0, jobs=1): err= 0: pid=303646: Sat Apr 27 02:47:27 2024 00:27:55.645 read: IOPS=495, BW=1981KiB/s (2029kB/s)(19.4MiB/10015msec) 00:27:55.645 slat (usec): min=5, max=167, avg=14.70, stdev=13.88 00:27:55.645 clat (usec): min=16065, max=55602, avg=32184.52, stdev=2140.19 00:27:55.645 lat (usec): min=16073, max=55622, avg=32199.23, stdev=2141.49 00:27:55.645 clat percentiles (usec): 00:27:55.645 | 1.00th=[24773], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:27:55.645 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:27:55.645 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:27:55.645 | 99.00th=[34866], 99.50th=[34866], 99.90th=[55313], 
99.95th=[55313], 00:27:55.645 | 99.99th=[55837] 00:27:55.645 bw ( KiB/s): min= 1792, max= 2048, per=4.23%, avg=1977.60, stdev=77.42, samples=20 00:27:55.645 iops : min= 448, max= 512, avg=494.40, stdev=19.35, samples=20 00:27:55.645 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:27:55.645 cpu : usr=98.21%, sys=0.92%, ctx=33, majf=0, minf=26 00:27:55.645 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:55.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.645 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.645 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.645 filename2: (groupid=0, jobs=1): err= 0: pid=303647: Sat Apr 27 02:47:27 2024 00:27:55.645 read: IOPS=456, BW=1825KiB/s (1869kB/s)(17.8MiB/10001msec) 00:27:55.645 slat (usec): min=5, max=128, avg=18.37, stdev=18.26 00:27:55.645 clat (usec): min=7074, max=63265, avg=34965.36, stdev=6453.04 00:27:55.645 lat (usec): min=7083, max=63319, avg=34983.72, stdev=6452.45 00:27:55.645 clat percentiles (usec): 00:27:55.645 | 1.00th=[21103], 5.00th=[30016], 10.00th=[31065], 20.00th=[31851], 00:27:55.645 | 30.00th=[32113], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:27:55.645 | 70.00th=[33817], 80.00th=[38011], 90.00th=[43779], 95.00th=[49546], 00:27:55.645 | 99.00th=[56361], 99.50th=[57934], 99.90th=[63177], 99.95th=[63177], 00:27:55.645 | 99.99th=[63177] 00:27:55.645 bw ( KiB/s): min= 1504, max= 2000, per=3.88%, avg=1813.89, stdev=120.47, samples=19 00:27:55.645 iops : min= 376, max= 500, avg=453.47, stdev=30.12, samples=19 00:27:55.645 lat (msec) : 10=0.02%, 20=0.72%, 50=94.52%, 100=4.73% 00:27:55.645 cpu : usr=98.74%, sys=0.87%, ctx=71, majf=0, minf=45 00:27:55.645 IO depths : 1=0.1%, 2=0.4%, 4=6.9%, 8=77.9%, 16=14.7%, 32=0.0%, >=64=0.0% 00:27:55.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.645 complete : 0=0.0%, 4=90.3%, 8=5.9%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.645 issued rwts: total=4564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.645 filename2: (groupid=0, jobs=1): err= 0: pid=303648: Sat Apr 27 02:47:27 2024 00:27:55.645 read: IOPS=491, BW=1965KiB/s (2012kB/s)(19.2MiB/10015msec) 00:27:55.645 slat (usec): min=5, max=123, avg=15.48, stdev=14.76 00:27:55.645 clat (usec): min=3540, max=59847, avg=32446.18, stdev=7349.66 00:27:55.645 lat (usec): min=3556, max=59855, avg=32461.66, stdev=7350.41 00:27:55.645 clat percentiles (usec): 00:27:55.645 | 1.00th=[ 6521], 5.00th=[19792], 10.00th=[27395], 20.00th=[31065], 00:27:55.645 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:27:55.645 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42206], 95.00th=[44303], 00:27:55.645 | 99.00th=[50594], 99.50th=[53740], 99.90th=[60031], 99.95th=[60031], 00:27:55.645 | 99.99th=[60031] 00:27:55.645 bw ( KiB/s): min= 1664, max= 2496, per=4.19%, avg=1961.60, stdev=219.18, samples=20 00:27:55.645 iops : min= 416, max= 624, avg=490.40, stdev=54.79, samples=20 00:27:55.645 lat (msec) : 4=0.33%, 10=3.37%, 20=1.48%, 50=93.58%, 100=1.24% 00:27:55.645 cpu : usr=98.91%, sys=0.76%, ctx=12, majf=0, minf=31 00:27:55.645 IO depths : 1=2.9%, 2=5.9%, 4=15.4%, 8=65.4%, 16=10.4%, 32=0.0%, >=64=0.0% 00:27:55.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.645 complete : 0=0.0%, 4=91.7%, 8=3.5%, 16=4.9%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.645 issued rwts: total=4920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.645 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.645 00:27:55.645 Run status group 0 (all jobs): 00:27:55.645 READ: bw=45.7MiB/s (47.9MB/s), 1825KiB/s-2260KiB/s (1869kB/s-2314kB/s), io=458MiB (480MB), run=10001-10023msec 00:27:55.645 02:47:27 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:55.645 02:47:27 -- target/dif.sh@43 -- # local sub 00:27:55.645 02:47:27 -- target/dif.sh@45 -- # for sub in "$@" 00:27:55.645 02:47:27 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:55.645 02:47:27 -- target/dif.sh@36 -- # local sub_id=0 00:27:55.645 02:47:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:55.645 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.645 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.645 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.645 02:47:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:55.645 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.645 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.645 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.645 02:47:27 -- target/dif.sh@45 -- # for sub in "$@" 00:27:55.645 02:47:27 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:55.645 02:47:27 -- target/dif.sh@36 -- # local sub_id=1 00:27:55.645 02:47:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.645 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.645 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.645 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.645 02:47:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:55.645 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.645 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.645 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.645 02:47:27 -- target/dif.sh@45 -- # for sub in "$@" 00:27:55.645 02:47:27 -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:55.645 02:47:27 -- target/dif.sh@36 -- # local sub_id=2 00:27:55.645 02:47:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:55.645 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.645 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.645 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.645 02:47:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:55.645 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.645 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.645 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.645 02:47:27 -- target/dif.sh@115 -- # NULL_DIF=1 00:27:55.645 02:47:27 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:55.645 02:47:27 -- target/dif.sh@115 -- # numjobs=2 00:27:55.645 02:47:27 -- target/dif.sh@115 -- # iodepth=8 00:27:55.645 02:47:27 -- target/dif.sh@115 -- # runtime=5 00:27:55.645 02:47:27 -- target/dif.sh@115 -- # files=1 00:27:55.645 02:47:27 -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:55.645 02:47:27 -- target/dif.sh@28 -- # local sub 00:27:55.645 02:47:27 -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.645 02:47:27 -- target/dif.sh@31 -- # create_subsystem 0 00:27:55.645 02:47:27 -- target/dif.sh@18 
-- # local sub_id=0 00:27:55.645 02:47:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:55.645 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.645 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.645 bdev_null0 00:27:55.645 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.645 02:47:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:55.645 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.645 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.645 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.646 02:47:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:55.646 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.646 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.646 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.646 02:47:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:55.646 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.646 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.646 [2024-04-27 02:47:27.554796] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.646 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.646 02:47:27 -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.646 02:47:27 -- target/dif.sh@31 -- # create_subsystem 1 00:27:55.646 02:47:27 -- target/dif.sh@18 -- # local sub_id=1 00:27:55.646 02:47:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:55.646 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.646 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.646 bdev_null1 00:27:55.646 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.646 02:47:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:55.646 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.646 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.646 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.646 02:47:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:55.646 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.646 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.646 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.646 02:47:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:55.646 02:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.646 02:47:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.646 02:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.646 02:47:27 -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:55.646 02:47:27 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:55.646 02:47:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.646 02:47:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:55.646 02:47:27 -- common/autotest_common.sh@1342 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.646 02:47:27 -- nvmf/common.sh@521 -- # config=() 00:27:55.646 02:47:27 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:55.646 02:47:27 -- nvmf/common.sh@521 -- # local subsystem config 00:27:55.646 02:47:27 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:55.646 02:47:27 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:55.646 02:47:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:55.646 02:47:27 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.646 02:47:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:55.646 { 00:27:55.646 "params": { 00:27:55.646 "name": "Nvme$subsystem", 00:27:55.646 "trtype": "$TEST_TRANSPORT", 00:27:55.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.646 "adrfam": "ipv4", 00:27:55.646 "trsvcid": "$NVMF_PORT", 00:27:55.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.646 "hdgst": ${hdgst:-false}, 00:27:55.646 "ddgst": ${ddgst:-false} 00:27:55.646 }, 00:27:55.646 "method": "bdev_nvme_attach_controller" 00:27:55.646 } 00:27:55.646 EOF 00:27:55.646 )") 00:27:55.646 02:47:27 -- common/autotest_common.sh@1327 -- # shift 00:27:55.646 02:47:27 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:55.646 02:47:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.646 02:47:27 -- target/dif.sh@82 -- # gen_fio_conf 00:27:55.646 02:47:27 -- target/dif.sh@54 -- # local file 00:27:55.646 02:47:27 -- target/dif.sh@56 -- # cat 00:27:55.646 02:47:27 -- nvmf/common.sh@543 -- # cat 00:27:55.646 02:47:27 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.646 02:47:27 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:55.646 02:47:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:55.646 02:47:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:55.646 02:47:27 -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.646 02:47:27 -- target/dif.sh@73 -- # cat 00:27:55.646 02:47:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:55.646 02:47:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:55.646 { 00:27:55.646 "params": { 00:27:55.646 "name": "Nvme$subsystem", 00:27:55.646 "trtype": "$TEST_TRANSPORT", 00:27:55.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.646 "adrfam": "ipv4", 00:27:55.646 "trsvcid": "$NVMF_PORT", 00:27:55.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.646 "hdgst": ${hdgst:-false}, 00:27:55.646 "ddgst": ${ddgst:-false} 00:27:55.646 }, 00:27:55.646 "method": "bdev_nvme_attach_controller" 00:27:55.646 } 00:27:55.646 EOF 00:27:55.646 )") 00:27:55.646 02:47:27 -- nvmf/common.sh@543 -- # cat 00:27:55.646 02:47:27 -- target/dif.sh@72 -- # (( file++ )) 00:27:55.646 02:47:27 -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.646 02:47:27 -- nvmf/common.sh@545 -- # jq . 
00:27:55.646 02:47:27 -- nvmf/common.sh@546 -- # IFS=, 00:27:55.646 02:47:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:55.646 "params": { 00:27:55.646 "name": "Nvme0", 00:27:55.646 "trtype": "tcp", 00:27:55.646 "traddr": "10.0.0.2", 00:27:55.646 "adrfam": "ipv4", 00:27:55.646 "trsvcid": "4420", 00:27:55.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:55.646 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:55.646 "hdgst": false, 00:27:55.646 "ddgst": false 00:27:55.646 }, 00:27:55.646 "method": "bdev_nvme_attach_controller" 00:27:55.646 },{ 00:27:55.646 "params": { 00:27:55.646 "name": "Nvme1", 00:27:55.646 "trtype": "tcp", 00:27:55.646 "traddr": "10.0.0.2", 00:27:55.646 "adrfam": "ipv4", 00:27:55.646 "trsvcid": "4420", 00:27:55.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:55.646 "hdgst": false, 00:27:55.646 "ddgst": false 00:27:55.646 }, 00:27:55.646 "method": "bdev_nvme_attach_controller" 00:27:55.646 }' 00:27:55.646 02:47:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:55.646 02:47:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:55.646 02:47:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.646 02:47:27 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.646 02:47:27 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:55.646 02:47:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:55.646 02:47:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:55.646 02:47:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:55.646 02:47:27 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:55.646 02:47:27 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.646 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:55.646 ... 00:27:55.646 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:55.646 ... 
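The trace above shows how dif.sh drives fio against the null bdevs: rpc_cmd creates DIF-enabled null bdevs and NVMe/TCP subsystems, gen_nvmf_target_json prints bdev_nvme_attach_controller entries for Nvme0 and Nvme1, and fio is launched with the spdk_bdev ioengine preloaded and the JSON handed over via --spdk_json_conf. A minimal hand-run sketch of the same flow follows; it assumes an nvmf_tgt is already running with a TCP transport created, the JSON and job-file paths are placeholders, and the RPC arguments and fio flags are copied from the trace.

  # Hedged sketch (not a verbatim harness command): recreate the dif fio run by hand.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Target side: a null bdev with 16-byte metadata and DIF type 1, exported over NVMe/TCP.
  $SPDK_DIR/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: save the bdev_nvme_attach_controller JSON printed above to a file
  # (placeholder path) and pass it to fio together with a job file, preloading the
  # spdk_bdev ioengine exactly as the harness does.
  LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme_attach.json /tmp/dif_job.fio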
00:27:55.646 fio-3.35 00:27:55.646 Starting 4 threads 00:27:55.646 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.936 00:28:00.936 filename0: (groupid=0, jobs=1): err= 0: pid=306059: Sat Apr 27 02:47:33 2024 00:28:00.936 read: IOPS=2459, BW=19.2MiB/s (20.1MB/s)(96.1MiB/5003msec) 00:28:00.936 slat (nsec): min=5333, max=42221, avg=5931.94, stdev=1672.35 00:28:00.936 clat (usec): min=1183, max=8106, avg=3235.38, stdev=644.14 00:28:00.936 lat (usec): min=1194, max=8137, avg=3241.31, stdev=644.17 00:28:00.936 clat percentiles (usec): 00:28:00.936 | 1.00th=[ 1942], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2737], 00:28:00.936 | 30.00th=[ 2900], 40.00th=[ 3032], 50.00th=[ 3195], 60.00th=[ 3326], 00:28:00.936 | 70.00th=[ 3458], 80.00th=[ 3687], 90.00th=[ 4047], 95.00th=[ 4424], 00:28:00.936 | 99.00th=[ 5145], 99.50th=[ 5342], 99.90th=[ 5932], 99.95th=[ 8029], 00:28:00.936 | 99.99th=[ 8094] 00:28:00.937 bw ( KiB/s): min=19296, max=20112, per=30.40%, avg=19681.60, stdev=234.84, samples=10 00:28:00.937 iops : min= 2412, max= 2514, avg=2460.20, stdev=29.36, samples=10 00:28:00.937 lat (msec) : 2=1.27%, 4=87.74%, 10=10.99% 00:28:00.937 cpu : usr=96.92%, sys=2.80%, ctx=7, majf=0, minf=67 00:28:00.937 IO depths : 1=0.3%, 2=1.1%, 4=71.0%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:00.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.937 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.937 issued rwts: total=12306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.937 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:00.937 filename0: (groupid=0, jobs=1): err= 0: pid=306060: Sat Apr 27 02:47:33 2024 00:28:00.937 read: IOPS=1873, BW=14.6MiB/s (15.3MB/s)(73.2MiB/5002msec) 00:28:00.937 slat (nsec): min=5325, max=47578, avg=7199.06, stdev=2149.52 00:28:00.937 clat (usec): min=2355, max=7546, avg=4251.49, stdev=679.47 00:28:00.937 lat (usec): min=2361, max=7551, avg=4258.69, stdev=679.43 00:28:00.937 clat percentiles (usec): 00:28:00.937 | 1.00th=[ 2868], 5.00th=[ 3228], 10.00th=[ 3425], 20.00th=[ 3654], 00:28:00.937 | 30.00th=[ 3884], 40.00th=[ 4080], 50.00th=[ 4228], 60.00th=[ 4359], 00:28:00.937 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 5145], 95.00th=[ 5473], 00:28:00.937 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 6652], 99.95th=[ 6915], 00:28:00.937 | 99.99th=[ 7570] 00:28:00.937 bw ( KiB/s): min=14624, max=15232, per=23.14%, avg=14982.20, stdev=211.64, samples=10 00:28:00.937 iops : min= 1828, max= 1904, avg=1872.70, stdev=26.42, samples=10 00:28:00.937 lat (msec) : 4=36.87%, 10=63.13% 00:28:00.937 cpu : usr=97.24%, sys=2.50%, ctx=7, majf=0, minf=91 00:28:00.937 IO depths : 1=0.2%, 2=1.0%, 4=67.7%, 8=31.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:00.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.937 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.937 issued rwts: total=9370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.937 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:00.937 filename1: (groupid=0, jobs=1): err= 0: pid=306061: Sat Apr 27 02:47:33 2024 00:28:00.937 read: IOPS=1854, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5003msec) 00:28:00.937 slat (nsec): min=5322, max=31098, avg=5862.44, stdev=1449.59 00:28:00.937 clat (usec): min=2031, max=46778, avg=4299.11, stdev=1423.15 00:28:00.937 lat (usec): min=2037, max=46809, avg=4304.97, stdev=1423.38 00:28:00.937 clat percentiles (usec): 00:28:00.937 | 1.00th=[ 2802], 5.00th=[ 3228], 
10.00th=[ 3425], 20.00th=[ 3687], 00:28:00.937 | 30.00th=[ 3884], 40.00th=[ 4080], 50.00th=[ 4228], 60.00th=[ 4359], 00:28:00.937 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 5145], 95.00th=[ 5538], 00:28:00.937 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 7177], 99.95th=[46924], 00:28:00.937 | 99.99th=[46924] 00:28:00.937 bw ( KiB/s): min=13520, max=15344, per=22.90%, avg=14828.80, stdev=494.01, samples=10 00:28:00.937 iops : min= 1690, max= 1918, avg=1853.60, stdev=61.75, samples=10 00:28:00.937 lat (msec) : 4=35.82%, 10=64.09%, 50=0.09% 00:28:00.937 cpu : usr=96.92%, sys=2.82%, ctx=8, majf=0, minf=78 00:28:00.937 IO depths : 1=0.2%, 2=1.4%, 4=67.6%, 8=30.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:00.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.937 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.937 issued rwts: total=9276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.937 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:00.937 filename1: (groupid=0, jobs=1): err= 0: pid=306062: Sat Apr 27 02:47:33 2024 00:28:00.937 read: IOPS=1906, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5003msec) 00:28:00.937 slat (nsec): min=5315, max=45320, avg=5938.52, stdev=1841.77 00:28:00.937 clat (usec): min=2415, max=6943, avg=4179.60, stdev=665.59 00:28:00.937 lat (usec): min=2421, max=6976, avg=4185.54, stdev=665.59 00:28:00.937 clat percentiles (usec): 00:28:00.937 | 1.00th=[ 2802], 5.00th=[ 3163], 10.00th=[ 3359], 20.00th=[ 3621], 00:28:00.937 | 30.00th=[ 3818], 40.00th=[ 3982], 50.00th=[ 4146], 60.00th=[ 4293], 00:28:00.937 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5080], 95.00th=[ 5407], 00:28:00.937 | 99.00th=[ 5997], 99.50th=[ 6194], 99.90th=[ 6652], 99.95th=[ 6718], 00:28:00.937 | 99.99th=[ 6915] 00:28:00.937 bw ( KiB/s): min=14992, max=15408, per=23.56%, avg=15254.40, stdev=143.94, samples=10 00:28:00.937 iops : min= 1874, max= 1926, avg=1906.80, stdev=17.99, samples=10 00:28:00.937 lat (msec) : 4=41.63%, 10=58.37% 00:28:00.937 cpu : usr=97.62%, sys=2.12%, ctx=9, majf=0, minf=85 00:28:00.937 IO depths : 1=0.1%, 2=1.4%, 4=68.0%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:00.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.937 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.937 issued rwts: total=9539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.937 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:00.937 00:28:00.937 Run status group 0 (all jobs): 00:28:00.937 READ: bw=63.2MiB/s (66.3MB/s), 14.5MiB/s-19.2MiB/s (15.2MB/s-20.1MB/s), io=316MiB (332MB), run=5002-5003msec 00:28:00.937 02:47:33 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:00.937 02:47:33 -- target/dif.sh@43 -- # local sub 00:28:00.937 02:47:33 -- target/dif.sh@45 -- # for sub in "$@" 00:28:00.937 02:47:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:00.937 02:47:33 -- target/dif.sh@36 -- # local sub_id=0 00:28:00.937 02:47:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:00.937 02:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.937 02:47:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.937 02:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.937 02:47:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:00.937 02:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.937 02:47:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.937 02:47:33 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.937 02:47:33 -- target/dif.sh@45 -- # for sub in "$@" 00:28:00.937 02:47:33 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:00.937 02:47:33 -- target/dif.sh@36 -- # local sub_id=1 00:28:00.937 02:47:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:00.937 02:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.937 02:47:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.937 02:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.937 02:47:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:00.937 02:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.937 02:47:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.937 02:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.937 00:28:00.937 real 0m24.130s 00:28:00.937 user 5m9.616s 00:28:00.937 sys 0m3.869s 00:28:00.937 02:47:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:00.937 02:47:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.937 ************************************ 00:28:00.937 END TEST fio_dif_rand_params 00:28:00.937 ************************************ 00:28:00.937 02:47:33 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:00.937 02:47:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:00.937 02:47:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:00.937 02:47:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.937 ************************************ 00:28:00.937 START TEST fio_dif_digest 00:28:00.937 ************************************ 00:28:00.937 02:47:34 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:28:00.937 02:47:34 -- target/dif.sh@123 -- # local NULL_DIF 00:28:00.937 02:47:34 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:00.937 02:47:34 -- target/dif.sh@125 -- # local hdgst ddgst 00:28:00.937 02:47:34 -- target/dif.sh@127 -- # NULL_DIF=3 00:28:00.937 02:47:34 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:00.937 02:47:34 -- target/dif.sh@127 -- # numjobs=3 00:28:00.937 02:47:34 -- target/dif.sh@127 -- # iodepth=3 00:28:00.937 02:47:34 -- target/dif.sh@127 -- # runtime=10 00:28:00.937 02:47:34 -- target/dif.sh@128 -- # hdgst=true 00:28:00.937 02:47:34 -- target/dif.sh@128 -- # ddgst=true 00:28:00.937 02:47:34 -- target/dif.sh@130 -- # create_subsystems 0 00:28:00.937 02:47:34 -- target/dif.sh@28 -- # local sub 00:28:00.937 02:47:34 -- target/dif.sh@30 -- # for sub in "$@" 00:28:00.937 02:47:34 -- target/dif.sh@31 -- # create_subsystem 0 00:28:00.937 02:47:34 -- target/dif.sh@18 -- # local sub_id=0 00:28:00.937 02:47:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:00.937 02:47:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.937 02:47:34 -- common/autotest_common.sh@10 -- # set +x 00:28:00.937 bdev_null0 00:28:00.937 02:47:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.937 02:47:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:00.937 02:47:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.937 02:47:34 -- common/autotest_common.sh@10 -- # set +x 00:28:00.937 02:47:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.937 02:47:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:00.937 
02:47:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.937 02:47:34 -- common/autotest_common.sh@10 -- # set +x 00:28:00.937 02:47:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.937 02:47:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:00.937 02:47:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.937 02:47:34 -- common/autotest_common.sh@10 -- # set +x 00:28:00.937 [2024-04-27 02:47:34.041218] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.937 02:47:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.937 02:47:34 -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:00.937 02:47:34 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:00.937 02:47:34 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:00.937 02:47:34 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:00.937 02:47:34 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:00.937 02:47:34 -- nvmf/common.sh@521 -- # config=() 00:28:00.937 02:47:34 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:00.937 02:47:34 -- nvmf/common.sh@521 -- # local subsystem config 00:28:00.938 02:47:34 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:00.938 02:47:34 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:00.938 02:47:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:00.938 02:47:34 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:00.938 02:47:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:00.938 { 00:28:00.938 "params": { 00:28:00.938 "name": "Nvme$subsystem", 00:28:00.938 "trtype": "$TEST_TRANSPORT", 00:28:00.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.938 "adrfam": "ipv4", 00:28:00.938 "trsvcid": "$NVMF_PORT", 00:28:00.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.938 "hdgst": ${hdgst:-false}, 00:28:00.938 "ddgst": ${ddgst:-false} 00:28:00.938 }, 00:28:00.938 "method": "bdev_nvme_attach_controller" 00:28:00.938 } 00:28:00.938 EOF 00:28:00.938 )") 00:28:00.938 02:47:34 -- common/autotest_common.sh@1327 -- # shift 00:28:00.938 02:47:34 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:00.938 02:47:34 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:00.938 02:47:34 -- target/dif.sh@82 -- # gen_fio_conf 00:28:00.938 02:47:34 -- target/dif.sh@54 -- # local file 00:28:00.938 02:47:34 -- target/dif.sh@56 -- # cat 00:28:00.938 02:47:34 -- nvmf/common.sh@543 -- # cat 00:28:00.938 02:47:34 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:00.938 02:47:34 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:00.938 02:47:34 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:00.938 02:47:34 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:00.938 02:47:34 -- target/dif.sh@72 -- # (( file <= files )) 00:28:00.938 02:47:34 -- nvmf/common.sh@545 -- # jq . 
00:28:00.938 02:47:34 -- nvmf/common.sh@546 -- # IFS=, 00:28:00.938 02:47:34 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:00.938 "params": { 00:28:00.938 "name": "Nvme0", 00:28:00.938 "trtype": "tcp", 00:28:00.938 "traddr": "10.0.0.2", 00:28:00.938 "adrfam": "ipv4", 00:28:00.938 "trsvcid": "4420", 00:28:00.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:00.938 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:00.938 "hdgst": true, 00:28:00.938 "ddgst": true 00:28:00.938 }, 00:28:00.938 "method": "bdev_nvme_attach_controller" 00:28:00.938 }' 00:28:00.938 02:47:34 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:00.938 02:47:34 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:00.938 02:47:34 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:00.938 02:47:34 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:00.938 02:47:34 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:00.938 02:47:34 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:00.938 02:47:34 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:00.938 02:47:34 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:00.938 02:47:34 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:00.938 02:47:34 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:00.938 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:00.938 ... 00:28:00.938 fio-3.35 00:28:00.938 Starting 3 threads 00:28:00.938 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.176 00:28:13.176 filename0: (groupid=0, jobs=1): err= 0: pid=307358: Sat Apr 27 02:47:44 2024 00:28:13.176 read: IOPS=123, BW=15.5MiB/s (16.2MB/s)(155MiB/10027msec) 00:28:13.176 slat (nsec): min=5695, max=31853, avg=6484.59, stdev=1125.82 00:28:13.176 clat (msec): min=8, max=100, avg=24.19, stdev=19.39 00:28:13.176 lat (msec): min=8, max=100, avg=24.19, stdev=19.39 00:28:13.176 clat percentiles (msec): 00:28:13.176 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:28:13.176 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:28:13.176 | 70.00th=[ 17], 80.00th=[ 54], 90.00th=[ 56], 95.00th=[ 57], 00:28:13.176 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 101], 99.95th=[ 101], 00:28:13.176 | 99.99th=[ 101] 00:28:13.176 bw ( KiB/s): min=10752, max=20992, per=30.48%, avg=15870.35, stdev=2936.23, samples=20 00:28:13.176 iops : min= 84, max= 164, avg=123.95, stdev=22.93, samples=20 00:28:13.176 lat (msec) : 10=5.95%, 20=69.59%, 100=24.22%, 250=0.24% 00:28:13.176 cpu : usr=96.90%, sys=2.87%, ctx=14, majf=0, minf=149 00:28:13.176 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.176 issued rwts: total=1243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.176 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:13.176 filename0: (groupid=0, jobs=1): err= 0: pid=307359: Sat Apr 27 02:47:44 2024 00:28:13.176 read: IOPS=149, BW=18.7MiB/s (19.6MB/s)(187MiB/10032msec) 00:28:13.176 slat (nsec): min=8087, max=32396, avg=9562.49, stdev=1675.13 00:28:13.176 clat (usec): min=7276, max=98618, avg=20088.49, stdev=17363.60 
00:28:13.176 lat (usec): min=7289, max=98627, avg=20098.05, stdev=17363.29 00:28:13.176 clat percentiles (usec): 00:28:13.176 | 1.00th=[ 8029], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10290], 00:28:13.176 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13042], 60.00th=[13960], 00:28:13.176 | 70.00th=[15270], 80.00th=[17171], 90.00th=[54789], 95.00th=[55837], 00:28:13.176 | 99.00th=[61080], 99.50th=[95945], 99.90th=[98042], 99.95th=[99091], 00:28:13.176 | 99.99th=[99091] 00:28:13.176 bw ( KiB/s): min=13056, max=37376, per=36.73%, avg=19123.20, stdev=7098.90, samples=20 00:28:13.176 iops : min= 102, max= 292, avg=149.40, stdev=55.46, samples=20 00:28:13.176 lat (msec) : 10=16.90%, 20=65.87%, 50=0.13%, 100=17.10% 00:28:13.176 cpu : usr=96.53%, sys=3.17%, ctx=12, majf=0, minf=86 00:28:13.176 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.176 issued rwts: total=1497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.176 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:13.176 filename0: (groupid=0, jobs=1): err= 0: pid=307360: Sat Apr 27 02:47:44 2024 00:28:13.176 read: IOPS=133, BW=16.7MiB/s (17.6MB/s)(168MiB/10008msec) 00:28:13.176 slat (nsec): min=5542, max=32359, avg=6459.17, stdev=1145.13 00:28:13.176 clat (usec): min=7222, max=98893, avg=22377.26, stdev=19043.41 00:28:13.176 lat (usec): min=7229, max=98899, avg=22383.72, stdev=19043.40 00:28:13.176 clat percentiles (usec): 00:28:13.176 | 1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11338], 00:28:13.176 | 30.00th=[12256], 40.00th=[13042], 50.00th=[14222], 60.00th=[15270], 00:28:13.176 | 70.00th=[16450], 80.00th=[20317], 90.00th=[55837], 95.00th=[56886], 00:28:13.176 | 99.00th=[95945], 99.50th=[95945], 99.90th=[98042], 99.95th=[99091], 00:28:13.176 | 99.99th=[99091] 00:28:13.176 bw ( KiB/s): min=12288, max=24832, per=32.89%, avg=17126.40, stdev=3085.33, samples=20 00:28:13.176 iops : min= 96, max= 194, avg=133.80, stdev=24.10, samples=20 00:28:13.176 lat (msec) : 10=8.35%, 20=71.36%, 50=0.30%, 100=19.99% 00:28:13.176 cpu : usr=97.13%, sys=2.61%, ctx=13, majf=0, minf=122 00:28:13.176 IO depths : 1=3.6%, 2=96.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.176 issued rwts: total=1341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.176 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:13.176 00:28:13.176 Run status group 0 (all jobs): 00:28:13.176 READ: bw=50.8MiB/s (53.3MB/s), 15.5MiB/s-18.7MiB/s (16.2MB/s-19.6MB/s), io=510MiB (535MB), run=10008-10032msec 00:28:13.176 02:47:44 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:13.176 02:47:44 -- target/dif.sh@43 -- # local sub 00:28:13.176 02:47:44 -- target/dif.sh@45 -- # for sub in "$@" 00:28:13.176 02:47:44 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:13.176 02:47:44 -- target/dif.sh@36 -- # local sub_id=0 00:28:13.176 02:47:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:13.176 02:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.176 02:47:44 -- common/autotest_common.sh@10 -- # set +x 00:28:13.176 02:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.176 02:47:44 -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:28:13.176 02:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.176 02:47:44 -- common/autotest_common.sh@10 -- # set +x 00:28:13.176 02:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.176 00:28:13.176 real 0m10.994s 00:28:13.176 user 0m41.291s 00:28:13.176 sys 0m1.162s 00:28:13.176 02:47:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:13.176 02:47:44 -- common/autotest_common.sh@10 -- # set +x 00:28:13.176 ************************************ 00:28:13.176 END TEST fio_dif_digest 00:28:13.176 ************************************ 00:28:13.176 02:47:45 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:13.176 02:47:45 -- target/dif.sh@147 -- # nvmftestfini 00:28:13.176 02:47:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:13.176 02:47:45 -- nvmf/common.sh@117 -- # sync 00:28:13.176 02:47:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:13.176 02:47:45 -- nvmf/common.sh@120 -- # set +e 00:28:13.176 02:47:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:13.176 02:47:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:13.176 rmmod nvme_tcp 00:28:13.176 rmmod nvme_fabrics 00:28:13.176 rmmod nvme_keyring 00:28:13.176 02:47:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:13.176 02:47:45 -- nvmf/common.sh@124 -- # set -e 00:28:13.176 02:47:45 -- nvmf/common.sh@125 -- # return 0 00:28:13.176 02:47:45 -- nvmf/common.sh@478 -- # '[' -n 296851 ']' 00:28:13.176 02:47:45 -- nvmf/common.sh@479 -- # killprocess 296851 00:28:13.176 02:47:45 -- common/autotest_common.sh@936 -- # '[' -z 296851 ']' 00:28:13.176 02:47:45 -- common/autotest_common.sh@940 -- # kill -0 296851 00:28:13.176 02:47:45 -- common/autotest_common.sh@941 -- # uname 00:28:13.176 02:47:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:13.176 02:47:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 296851 00:28:13.176 02:47:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:13.176 02:47:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:13.176 02:47:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 296851' 00:28:13.176 killing process with pid 296851 00:28:13.176 02:47:45 -- common/autotest_common.sh@955 -- # kill 296851 00:28:13.176 02:47:45 -- common/autotest_common.sh@960 -- # wait 296851 00:28:13.176 02:47:45 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:13.176 02:47:45 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:14.563 Waiting for block devices as requested 00:28:14.563 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:14.825 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:14.825 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:14.825 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:14.825 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:15.086 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:15.086 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:15.086 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:15.086 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:15.348 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:15.348 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:15.348 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:15.609 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:15.609 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:15.609 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:15.609 0000:00:01.0 
(8086 0b00): vfio-pci -> ioatdma 00:28:15.871 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:15.871 02:47:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:15.871 02:47:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:15.871 02:47:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:15.871 02:47:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:15.871 02:47:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.871 02:47:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:15.871 02:47:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.783 02:47:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:17.783 00:28:17.783 real 1m15.490s 00:28:17.783 user 7m53.554s 00:28:17.783 sys 0m18.042s 00:28:17.783 02:47:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:17.783 02:47:51 -- common/autotest_common.sh@10 -- # set +x 00:28:17.783 ************************************ 00:28:17.783 END TEST nvmf_dif 00:28:17.783 ************************************ 00:28:17.783 02:47:51 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:17.783 02:47:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:17.783 02:47:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:17.783 02:47:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.044 ************************************ 00:28:18.044 START TEST nvmf_abort_qd_sizes 00:28:18.044 ************************************ 00:28:18.044 02:47:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:18.044 * Looking for test storage... 
00:28:18.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:18.044 02:47:51 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.044 02:47:51 -- nvmf/common.sh@7 -- # uname -s 00:28:18.044 02:47:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.044 02:47:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.044 02:47:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.044 02:47:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.044 02:47:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.044 02:47:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.044 02:47:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.044 02:47:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.044 02:47:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.044 02:47:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.044 02:47:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:18.044 02:47:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:18.044 02:47:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.044 02:47:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.044 02:47:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.044 02:47:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.044 02:47:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.044 02:47:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.044 02:47:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.044 02:47:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.045 02:47:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.045 02:47:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.045 02:47:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.045 02:47:51 -- paths/export.sh@5 -- # export PATH 00:28:18.045 02:47:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.045 02:47:51 -- nvmf/common.sh@47 -- # : 0 00:28:18.045 02:47:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:18.045 02:47:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:18.045 02:47:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.045 02:47:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.045 02:47:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.045 02:47:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:18.045 02:47:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:18.045 02:47:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:18.045 02:47:51 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:18.045 02:47:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:18.045 02:47:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.045 02:47:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:18.045 02:47:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:18.045 02:47:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:18.045 02:47:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.045 02:47:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:18.045 02:47:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.045 02:47:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:18.045 02:47:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:18.045 02:47:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:18.045 02:47:51 -- common/autotest_common.sh@10 -- # set +x 00:28:26.189 02:47:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:26.189 02:47:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:26.189 02:47:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:26.189 02:47:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:26.189 02:47:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:26.189 02:47:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:26.189 02:47:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:26.189 02:47:58 -- nvmf/common.sh@295 -- # net_devs=() 00:28:26.189 02:47:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:26.189 02:47:58 -- nvmf/common.sh@296 -- # e810=() 00:28:26.189 02:47:58 -- nvmf/common.sh@296 -- # local -ga e810 00:28:26.189 02:47:58 -- nvmf/common.sh@297 -- # x722=() 00:28:26.189 02:47:58 -- nvmf/common.sh@297 -- # local -ga x722 00:28:26.189 02:47:58 -- nvmf/common.sh@298 -- # mlx=() 00:28:26.189 02:47:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:26.189 02:47:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.189 02:47:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.189 02:47:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.189 02:47:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.189 02:47:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.189 02:47:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.189 02:47:58 -- nvmf/common.sh@312 -- # 
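nvmftestinit next enumerates the physical NICs the TCP tests can use: gather_supported_nvmf_pci_devs (traced below) collects Intel E810 (0x1592/0x159b), X722 (0x37d2) and Mellanox ConnectX device IDs into the e810/x722/mlx arrays, then resolves each matching PCI function to its net device through sysfs. A hedged stand-alone sketch of the same lookup using only lspci and sysfs is shown here; the helper itself is illustrative and not part of nvmf/common.sh.

  # List candidate NVMe-oF NICs by PCI vendor:device ID and print their net device names.
  # The ID list is a subset of the arrays traced below (Intel vendor 0x8086, Mellanox 0x15b3).
  for id in 8086:1592 8086:159b 8086:37d2 15b3:1017 15b3:101d; do
      lspci -D -d "$id" | while read -r bdf _; do
          # net device(s) registered for this PCI function, e.g. cvl_0_0 / cvl_0_1
          ls "/sys/bus/pci/devices/$bdf/net" 2>/dev/null
      done
  done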
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.190 02:47:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.190 02:47:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.190 02:47:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.190 02:47:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.190 02:47:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:26.190 02:47:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:26.190 02:47:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:26.190 02:47:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.190 02:47:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:26.190 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:26.190 02:47:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.190 02:47:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:26.190 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:26.190 02:47:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:26.190 02:47:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.190 02:47:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.190 02:47:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:26.190 02:47:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.190 02:47:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:26.190 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:26.190 02:47:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.190 02:47:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.190 02:47:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.190 02:47:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:26.190 02:47:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.190 02:47:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:26.190 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:26.190 02:47:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.190 02:47:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:26.190 02:47:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:26.190 02:47:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:26.190 02:47:58 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:26.190 02:47:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:26.190 02:47:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.190 02:47:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.190 02:47:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.190 02:47:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:26.190 02:47:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.190 02:47:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.190 02:47:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:26.190 02:47:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.190 02:47:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.190 02:47:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:26.190 02:47:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:26.190 02:47:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.190 02:47:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.190 02:47:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.190 02:47:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.190 02:47:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:26.190 02:47:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.190 02:47:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.190 02:47:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.190 02:47:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:26.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:28:26.190 00:28:26.190 --- 10.0.0.2 ping statistics --- 00:28:26.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.190 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:28:26.190 02:47:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:26.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:28:26.190 00:28:26.190 --- 10.0.0.1 ping statistics --- 00:28:26.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.190 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:28:26.190 02:47:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.190 02:47:58 -- nvmf/common.sh@411 -- # return 0 00:28:26.190 02:47:58 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:26.190 02:47:58 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:28.821 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:28.821 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:28.821 02:48:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.821 02:48:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:28.821 02:48:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:28.821 02:48:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.821 02:48:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:28.821 02:48:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:28.821 02:48:02 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:28.821 02:48:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:28.821 02:48:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:28.821 02:48:02 -- common/autotest_common.sh@10 -- # set +x 00:28:28.821 02:48:02 -- nvmf/common.sh@470 -- # nvmfpid=316704 00:28:28.821 02:48:02 -- nvmf/common.sh@471 -- # waitforlisten 316704 00:28:28.821 02:48:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:28.821 02:48:02 -- common/autotest_common.sh@817 -- # '[' -z 316704 ']' 00:28:28.821 02:48:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.821 02:48:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:28.821 02:48:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.821 02:48:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:28.821 02:48:02 -- common/autotest_common.sh@10 -- # set +x 00:28:28.821 [2024-04-27 02:48:02.180304] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:28:28.821 [2024-04-27 02:48:02.180350] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.821 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.821 [2024-04-27 02:48:02.244958] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:28.821 [2024-04-27 02:48:02.309604] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.821 [2024-04-27 02:48:02.309639] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.821 [2024-04-27 02:48:02.309648] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.821 [2024-04-27 02:48:02.309656] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.821 [2024-04-27 02:48:02.309663] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.821 [2024-04-27 02:48:02.309937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.821 [2024-04-27 02:48:02.309954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.821 [2024-04-27 02:48:02.310085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.821 [2024-04-27 02:48:02.310088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.393 02:48:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:29.393 02:48:02 -- common/autotest_common.sh@850 -- # return 0 00:28:29.393 02:48:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:29.393 02:48:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:29.393 02:48:02 -- common/autotest_common.sh@10 -- # set +x 00:28:29.393 02:48:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.393 02:48:02 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:29.393 02:48:02 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:29.393 02:48:02 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:29.393 02:48:02 -- scripts/common.sh@309 -- # local bdf bdfs 00:28:29.393 02:48:02 -- scripts/common.sh@310 -- # local nvmes 00:28:29.393 02:48:02 -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:28:29.393 02:48:02 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:29.393 02:48:02 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:29.393 02:48:02 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:28:29.393 02:48:02 -- scripts/common.sh@320 -- # uname -s 00:28:29.393 02:48:02 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:29.393 02:48:03 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:29.393 02:48:03 -- scripts/common.sh@325 -- # (( 1 )) 00:28:29.393 02:48:03 -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:28:29.393 02:48:03 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:29.393 02:48:03 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:28:29.393 02:48:03 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:29.393 02:48:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:29.393 02:48:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:29.393 02:48:03 -- 
common/autotest_common.sh@10 -- # set +x 00:28:29.654 ************************************ 00:28:29.654 START TEST spdk_target_abort 00:28:29.654 ************************************ 00:28:29.654 02:48:03 -- common/autotest_common.sh@1111 -- # spdk_target 00:28:29.654 02:48:03 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:29.654 02:48:03 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:28:29.654 02:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.654 02:48:03 -- common/autotest_common.sh@10 -- # set +x 00:28:29.914 spdk_targetn1 00:28:29.914 02:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.914 02:48:03 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:29.914 02:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.914 02:48:03 -- common/autotest_common.sh@10 -- # set +x 00:28:29.914 [2024-04-27 02:48:03.463382] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.914 02:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.914 02:48:03 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:29.915 02:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.915 02:48:03 -- common/autotest_common.sh@10 -- # set +x 00:28:29.915 02:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:29.915 02:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.915 02:48:03 -- common/autotest_common.sh@10 -- # set +x 00:28:29.915 02:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:29.915 02:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.915 02:48:03 -- common/autotest_common.sh@10 -- # set +x 00:28:29.915 [2024-04-27 02:48:03.503663] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.915 02:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
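For reference, the rpc_cmd calls traced above reduce to the following sequence. This is a condensed sketch, assuming the nvmf target app launched at the top of this block is running on the default RPC socket and that rpc_cmd is the test framework's wrapper around scripts/rpc.py; it is not a substitute for abort_qd_sizes.sh itself.

  # Attach the local PCIe drive as bdev "spdk_target" (it surfaces as spdk_targetn1),
  # then expose it over NVMe/TCP as nqn.2016-06.io.spdk:testnqn on 10.0.0.2:4420.
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420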
00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:29.915 02:48:03 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:30.176 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.176 [2024-04-27 02:48:03.640244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:344 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:28:30.176 [2024-04-27 02:48:03.640266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:002d p:1 m:0 dnr:0 00:28:30.176 [2024-04-27 02:48:03.647763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:496 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:28:30.176 [2024-04-27 02:48:03.647778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0041 p:1 m:0 dnr:0 00:28:30.176 [2024-04-27 02:48:03.732759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2552 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:28:30.176 [2024-04-27 02:48:03.732777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:33.501 Initializing NVMe Controllers 00:28:33.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:33.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:33.501 Initialization complete. Launching workers. 
00:28:33.501 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9195, failed: 3 00:28:33.501 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2873, failed to submit 6325 00:28:33.501 success 796, unsuccess 2077, failed 0 00:28:33.501 02:48:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:33.501 02:48:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:33.501 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.501 [2024-04-27 02:48:06.933390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:624 len:8 PRP1 0x200007c5e000 PRP2 0x0 00:28:33.501 [2024-04-27 02:48:06.933430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:0062 p:1 m:0 dnr:0 00:28:36.806 Initializing NVMe Controllers 00:28:36.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:36.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:36.806 Initialization complete. Launching workers. 00:28:36.806 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8695, failed: 1 00:28:36.806 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1235, failed to submit 7461 00:28:36.806 success 340, unsuccess 895, failed 0 00:28:36.806 02:48:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:36.806 02:48:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:36.806 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.806 [2024-04-27 02:48:10.235713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:183 nsid:1 lba:768 len:8 PRP1 0x20000791c000 PRP2 0x0 00:28:36.806 [2024-04-27 02:48:10.235757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:183 cdw0:0 sqhd:0060 p:1 m:0 dnr:0 00:28:37.067 [2024-04-27 02:48:10.481945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:157 nsid:1 lba:26072 len:8 PRP1 0x2000078e8000 PRP2 0x0 00:28:37.067 [2024-04-27 02:48:10.481970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:157 cdw0:0 sqhd:00b4 p:1 m:0 dnr:0 00:28:38.981 [2024-04-27 02:48:12.105686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:194848 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:28:38.981 [2024-04-27 02:48:12.105711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:39.922 Initializing NVMe Controllers 00:28:39.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:39.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:39.922 Initialization complete. Launching workers. 
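The abort passes above (queue depths 4 and 24, with the depth-64 totals reported just below) come from the qds=(4 24 64) loop in abort_qd_sizes.sh. Stripped of xtrace noise, the sweep is simply the loop below, a sketch using the same flags as the trace (50/50 read/write mix, 4 KiB I/O, path shortened to the repo-relative one); each pass then prints I/Os completed on the namespace, aborts submitted, and how many aborts came back as success, unsuccess, or failed.

  # Queue-depth sweep against the userspace target, as run above
  for qd in 4 24 64; do
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done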
00:28:39.922 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38949, failed: 3 00:28:39.922 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2478, failed to submit 36474 00:28:39.922 success 714, unsuccess 1764, failed 0 00:28:39.922 02:48:13 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:39.922 02:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.922 02:48:13 -- common/autotest_common.sh@10 -- # set +x 00:28:39.922 02:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.922 02:48:13 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:39.922 02:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:39.922 02:48:13 -- common/autotest_common.sh@10 -- # set +x 00:28:41.835 02:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:41.835 02:48:15 -- target/abort_qd_sizes.sh@61 -- # killprocess 316704 00:28:41.835 02:48:15 -- common/autotest_common.sh@936 -- # '[' -z 316704 ']' 00:28:41.835 02:48:15 -- common/autotest_common.sh@940 -- # kill -0 316704 00:28:41.835 02:48:15 -- common/autotest_common.sh@941 -- # uname 00:28:41.835 02:48:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:41.835 02:48:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 316704 00:28:41.835 02:48:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:41.835 02:48:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:41.835 02:48:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 316704' 00:28:41.835 killing process with pid 316704 00:28:41.835 02:48:15 -- common/autotest_common.sh@955 -- # kill 316704 00:28:41.835 02:48:15 -- common/autotest_common.sh@960 -- # wait 316704 00:28:41.835 00:28:41.835 real 0m12.151s 00:28:41.835 user 0m49.497s 00:28:41.835 sys 0m2.104s 00:28:41.835 02:48:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:41.835 02:48:15 -- common/autotest_common.sh@10 -- # set +x 00:28:41.835 ************************************ 00:28:41.836 END TEST spdk_target_abort 00:28:41.836 ************************************ 00:28:41.836 02:48:15 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:41.836 02:48:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:41.836 02:48:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:41.836 02:48:15 -- common/autotest_common.sh@10 -- # set +x 00:28:42.096 ************************************ 00:28:42.096 START TEST kernel_target_abort 00:28:42.096 ************************************ 00:28:42.096 02:48:15 -- common/autotest_common.sh@1111 -- # kernel_target 00:28:42.096 02:48:15 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:42.096 02:48:15 -- nvmf/common.sh@717 -- # local ip 00:28:42.096 02:48:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:42.096 02:48:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:42.096 02:48:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.096 02:48:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.096 02:48:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:42.096 02:48:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.096 02:48:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:42.096 02:48:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:42.096 02:48:15 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:28:42.096 02:48:15 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:42.096 02:48:15 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:42.096 02:48:15 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:28:42.096 02:48:15 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:42.096 02:48:15 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:42.096 02:48:15 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:42.096 02:48:15 -- nvmf/common.sh@628 -- # local block nvme 00:28:42.096 02:48:15 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:28:42.096 02:48:15 -- nvmf/common.sh@631 -- # modprobe nvmet 00:28:42.096 02:48:15 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:42.097 02:48:15 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:45.403 Waiting for block devices as requested 00:28:45.403 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:45.403 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:45.403 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:45.665 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:45.665 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:45.665 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:45.665 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:45.926 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:45.926 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:46.186 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:46.186 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:46.186 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:46.186 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:46.448 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:46.448 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:46.448 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:46.448 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:46.709 02:48:20 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:28:46.709 02:48:20 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:46.709 02:48:20 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:28:46.709 02:48:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:46.709 02:48:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:46.709 02:48:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:46.709 02:48:20 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:28:46.709 02:48:20 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:46.709 02:48:20 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:46.709 No valid GPT data, bailing 00:28:46.709 02:48:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:46.709 02:48:20 -- scripts/common.sh@391 -- # pt= 00:28:46.709 02:48:20 -- scripts/common.sh@392 -- # return 1 00:28:46.709 02:48:20 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:28:46.709 02:48:20 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:28:46.709 02:48:20 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:46.709 02:48:20 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:46.709 02:48:20 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:46.709 02:48:20 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:46.709 02:48:20 -- nvmf/common.sh@656 -- # echo 1 00:28:46.709 02:48:20 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:28:46.709 02:48:20 -- nvmf/common.sh@658 -- # echo 1 00:28:46.709 02:48:20 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:28:46.710 02:48:20 -- nvmf/common.sh@661 -- # echo tcp 00:28:46.710 02:48:20 -- nvmf/common.sh@662 -- # echo 4420 00:28:46.710 02:48:20 -- nvmf/common.sh@663 -- # echo ipv4 00:28:46.710 02:48:20 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:46.710 02:48:20 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:46.710 00:28:46.710 Discovery Log Number of Records 2, Generation counter 2 00:28:46.710 =====Discovery Log Entry 0====== 00:28:46.710 trtype: tcp 00:28:46.710 adrfam: ipv4 00:28:46.710 subtype: current discovery subsystem 00:28:46.710 treq: not specified, sq flow control disable supported 00:28:46.710 portid: 1 00:28:46.710 trsvcid: 4420 00:28:46.710 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:46.710 traddr: 10.0.0.1 00:28:46.710 eflags: none 00:28:46.710 sectype: none 00:28:46.710 =====Discovery Log Entry 1====== 00:28:46.710 trtype: tcp 00:28:46.710 adrfam: ipv4 00:28:46.710 subtype: nvme subsystem 00:28:46.710 treq: not specified, sq flow control disable supported 00:28:46.710 portid: 1 00:28:46.710 trsvcid: 4420 00:28:46.710 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:46.710 traddr: 10.0.0.1 00:28:46.710 eflags: none 00:28:46.710 sectype: none 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:46.710 02:48:20 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:46.710 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.016 Initializing NVMe Controllers 00:28:50.016 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:50.016 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:50.016 Initialization complete. Launching workers. 00:28:50.016 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36486, failed: 0 00:28:50.016 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36486, failed to submit 0 00:28:50.016 success 0, unsuccess 36486, failed 0 00:28:50.016 02:48:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:50.016 02:48:23 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:50.016 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.318 Initializing NVMe Controllers 00:28:53.318 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:53.318 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:53.318 Initialization complete. Launching workers. 00:28:53.318 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75148, failed: 0 00:28:53.318 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18926, failed to submit 56222 00:28:53.318 success 0, unsuccess 18926, failed 0 00:28:53.318 02:48:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:53.318 02:48:26 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:53.318 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.864 Initializing NVMe Controllers 00:28:55.864 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:55.864 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:55.864 Initialization complete. Launching workers. 
00:28:55.864 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73305, failed: 0 00:28:55.864 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18306, failed to submit 54999 00:28:55.864 success 0, unsuccess 18306, failed 0 00:28:55.864 02:48:29 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:55.864 02:48:29 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:55.864 02:48:29 -- nvmf/common.sh@675 -- # echo 0 00:28:55.864 02:48:29 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:55.864 02:48:29 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:55.864 02:48:29 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:55.864 02:48:29 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:55.864 02:48:29 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:28:55.864 02:48:29 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:28:55.864 02:48:29 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:59.172 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:59.172 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:59.172 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:59.172 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:59.172 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:59.172 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:59.172 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:59.172 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:59.172 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:59.172 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:59.172 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:59.172 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:59.434 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:59.434 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:59.434 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:59.434 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:01.351 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:01.351 00:29:01.351 real 0m19.184s 00:29:01.351 user 0m6.790s 00:29:01.351 sys 0m6.131s 00:29:01.351 02:48:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:01.351 02:48:34 -- common/autotest_common.sh@10 -- # set +x 00:29:01.351 ************************************ 00:29:01.351 END TEST kernel_target_abort 00:29:01.351 ************************************ 00:29:01.351 02:48:34 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:01.351 02:48:34 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:01.351 02:48:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:01.351 02:48:34 -- nvmf/common.sh@117 -- # sync 00:29:01.351 02:48:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:01.351 02:48:34 -- nvmf/common.sh@120 -- # set +e 00:29:01.351 02:48:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:01.351 02:48:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:01.351 rmmod nvme_tcp 00:29:01.351 rmmod nvme_fabrics 00:29:01.351 rmmod nvme_keyring 00:29:01.351 02:48:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:01.351 02:48:34 -- nvmf/common.sh@124 -- # set -e 00:29:01.351 02:48:34 -- nvmf/common.sh@125 -- # return 0 00:29:01.351 02:48:34 -- nvmf/common.sh@478 -- # '[' -n 316704 ']' 
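For the record, the kernel_target_abort run that just ended drives the in-kernel nvmet target through configfs rather than an SPDK app. The xtrace above shows the mkdir/echo/ln steps but not their redirect targets; filling those in with the standard nvmet attribute names, the setup amounts to roughly the sketch below (the backing device and addresses are the ones detected on this machine).

  # Reconstruction of the configfs writes behind configure_kernel_target; the
  # attribute file names are assumed from the standard nvmet configfs layout.
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

Teardown, also visible in the trace, is the reverse: disable the namespace (echo 0), remove the port symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.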
00:29:01.351 02:48:34 -- nvmf/common.sh@479 -- # killprocess 316704 00:29:01.351 02:48:34 -- common/autotest_common.sh@936 -- # '[' -z 316704 ']' 00:29:01.351 02:48:34 -- common/autotest_common.sh@940 -- # kill -0 316704 00:29:01.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (316704) - No such process 00:29:01.351 02:48:34 -- common/autotest_common.sh@963 -- # echo 'Process with pid 316704 is not found' 00:29:01.351 Process with pid 316704 is not found 00:29:01.351 02:48:34 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:29:01.351 02:48:34 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:04.662 Waiting for block devices as requested 00:29:04.662 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:04.662 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:04.923 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:04.923 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:04.923 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:04.923 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:05.230 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:05.230 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:05.230 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:05.504 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:05.504 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:05.504 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:05.504 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:05.765 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:05.765 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:05.765 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:06.026 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:06.026 02:48:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:06.026 02:48:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:06.026 02:48:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:06.026 02:48:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:06.026 02:48:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.026 02:48:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:06.026 02:48:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.961 02:48:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:07.961 00:29:07.961 real 0m49.972s 00:29:07.961 user 1m1.384s 00:29:07.961 sys 0m18.390s 00:29:07.961 02:48:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:07.961 02:48:41 -- common/autotest_common.sh@10 -- # set +x 00:29:07.961 ************************************ 00:29:07.961 END TEST nvmf_abort_qd_sizes 00:29:07.961 ************************************ 00:29:07.961 02:48:41 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:07.961 02:48:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:07.961 02:48:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:07.961 02:48:41 -- common/autotest_common.sh@10 -- # set +x 00:29:08.222 ************************************ 00:29:08.222 START TEST keyring_file 00:29:08.222 ************************************ 00:29:08.222 02:48:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:08.222 * Looking for test storage... 
00:29:08.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:08.222 02:48:41 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:08.222 02:48:41 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.222 02:48:41 -- nvmf/common.sh@7 -- # uname -s 00:29:08.222 02:48:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.222 02:48:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.222 02:48:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.222 02:48:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.222 02:48:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.222 02:48:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.222 02:48:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.222 02:48:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.222 02:48:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.222 02:48:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.222 02:48:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:08.222 02:48:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:08.222 02:48:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.222 02:48:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.222 02:48:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.222 02:48:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.222 02:48:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.222 02:48:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.222 02:48:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.222 02:48:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.222 02:48:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.222 02:48:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.222 02:48:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.222 02:48:41 -- paths/export.sh@5 -- # export PATH 00:29:08.222 02:48:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.222 02:48:41 -- nvmf/common.sh@47 -- # : 0 00:29:08.222 02:48:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:08.222 02:48:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:08.222 02:48:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.222 02:48:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.222 02:48:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.222 02:48:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:08.222 02:48:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:08.222 02:48:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:08.222 02:48:41 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:08.222 02:48:41 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:08.222 02:48:41 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:08.222 02:48:41 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:08.222 02:48:41 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:08.222 02:48:41 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:08.222 02:48:41 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:08.222 02:48:41 -- keyring/common.sh@15 -- # local name key digest path 00:29:08.222 02:48:41 -- keyring/common.sh@17 -- # name=key0 00:29:08.222 02:48:41 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:08.222 02:48:41 -- keyring/common.sh@17 -- # digest=0 00:29:08.222 02:48:41 -- keyring/common.sh@18 -- # mktemp 00:29:08.222 02:48:41 -- keyring/common.sh@18 -- # path=/tmp/tmp.PCCy62p9xR 00:29:08.222 02:48:41 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:08.222 02:48:41 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:08.222 02:48:41 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:08.222 02:48:41 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:08.222 02:48:41 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:08.222 02:48:41 -- nvmf/common.sh@693 -- # digest=0 00:29:08.222 02:48:41 -- nvmf/common.sh@694 -- # python - 00:29:08.222 02:48:41 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PCCy62p9xR 00:29:08.222 02:48:41 -- keyring/common.sh@23 -- # echo /tmp/tmp.PCCy62p9xR 00:29:08.222 02:48:41 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.PCCy62p9xR 00:29:08.222 02:48:41 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:08.222 02:48:41 -- keyring/common.sh@15 -- # local name key digest path 00:29:08.222 02:48:41 -- keyring/common.sh@17 -- # name=key1 00:29:08.222 02:48:41 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:08.222 02:48:41 -- keyring/common.sh@17 -- # digest=0 00:29:08.222 02:48:41 -- keyring/common.sh@18 -- # mktemp 00:29:08.222 02:48:41 -- keyring/common.sh@18 -- # path=/tmp/tmp.xsMyFOwpwJ 00:29:08.222 02:48:41 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:08.222 02:48:41 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:29:08.222 02:48:41 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:08.222 02:48:41 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:08.222 02:48:41 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:29:08.223 02:48:41 -- nvmf/common.sh@693 -- # digest=0 00:29:08.223 02:48:41 -- nvmf/common.sh@694 -- # python - 00:29:08.483 02:48:41 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xsMyFOwpwJ 00:29:08.483 02:48:41 -- keyring/common.sh@23 -- # echo /tmp/tmp.xsMyFOwpwJ 00:29:08.483 02:48:41 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.xsMyFOwpwJ 00:29:08.483 02:48:41 -- keyring/file.sh@30 -- # tgtpid=327272 00:29:08.483 02:48:41 -- keyring/file.sh@32 -- # waitforlisten 327272 00:29:08.483 02:48:41 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:08.483 02:48:41 -- common/autotest_common.sh@817 -- # '[' -z 327272 ']' 00:29:08.483 02:48:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.483 02:48:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:08.483 02:48:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.483 02:48:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:08.483 02:48:41 -- common/autotest_common.sh@10 -- # set +x 00:29:08.483 [2024-04-27 02:48:41.931001] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:29:08.483 [2024-04-27 02:48:41.931063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327272 ] 00:29:08.483 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.483 [2024-04-27 02:48:41.996490] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.483 [2024-04-27 02:48:42.069178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.423 02:48:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:09.423 02:48:42 -- common/autotest_common.sh@850 -- # return 0 00:29:09.423 02:48:42 -- keyring/file.sh@33 -- # rpc_cmd 00:29:09.423 02:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.423 02:48:42 -- common/autotest_common.sh@10 -- # set +x 00:29:09.423 [2024-04-27 02:48:42.692174] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.423 null0 00:29:09.423 [2024-04-27 02:48:42.724220] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:09.423 [2024-04-27 02:48:42.724470] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:09.423 [2024-04-27 02:48:42.732234] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:09.423 02:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.423 02:48:42 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:09.423 02:48:42 -- common/autotest_common.sh@638 -- # local es=0 00:29:09.423 02:48:42 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:09.423 02:48:42 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:09.423 02:48:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:09.423 02:48:42 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:09.423 02:48:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:09.423 02:48:42 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:09.423 02:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:09.423 02:48:42 -- common/autotest_common.sh@10 -- # set +x 00:29:09.423 [2024-04-27 02:48:42.748272] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:29:09.423 { 00:29:09.423 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:09.423 "secure_channel": false, 00:29:09.423 "listen_address": { 00:29:09.423 "trtype": "tcp", 00:29:09.423 "traddr": "127.0.0.1", 00:29:09.423 "trsvcid": "4420" 00:29:09.423 }, 00:29:09.423 "method": "nvmf_subsystem_add_listener", 00:29:09.423 "req_id": 1 00:29:09.423 } 00:29:09.423 Got JSON-RPC error response 00:29:09.423 response: 00:29:09.423 { 00:29:09.423 "code": -32602, 00:29:09.423 "message": "Invalid parameters" 00:29:09.423 } 00:29:09.423 02:48:42 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:09.423 02:48:42 -- common/autotest_common.sh@641 -- # es=1 00:29:09.423 02:48:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:09.423 02:48:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:09.423 02:48:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:09.423 02:48:42 -- keyring/file.sh@46 -- # bperfpid=327573 00:29:09.423 02:48:42 -- keyring/file.sh@48 -- # waitforlisten 327573 /var/tmp/bperf.sock 00:29:09.423 02:48:42 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:09.423 02:48:42 -- common/autotest_common.sh@817 -- # '[' -z 327573 ']' 00:29:09.423 02:48:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:09.423 02:48:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:09.423 02:48:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:09.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:09.423 02:48:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:09.423 02:48:42 -- common/autotest_common.sh@10 -- # set +x 00:29:09.423 [2024-04-27 02:48:42.801481] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
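The /tmp/tmp.* key files that bdevperf is handed just below were produced earlier by prep_key. In outline it is the sketch below; format_interchange_psk comes from test/nvmf/common.sh, which this test sources, and the redirection into the temp file is assumed since xtrace does not print it.

  # Write a PSK in NVMe TLS interchange format to a private (0600) temp file
  key0=$(mktemp)    # /tmp/tmp.PCCy62p9xR in this run
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0"
  chmod 0600 "$key0"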
00:29:09.423 [2024-04-27 02:48:42.801527] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327573 ] 00:29:09.423 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.423 [2024-04-27 02:48:42.858609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.423 [2024-04-27 02:48:42.920552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.994 02:48:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:09.994 02:48:43 -- common/autotest_common.sh@850 -- # return 0 00:29:09.994 02:48:43 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PCCy62p9xR 00:29:09.994 02:48:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PCCy62p9xR 00:29:10.255 02:48:43 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xsMyFOwpwJ 00:29:10.255 02:48:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xsMyFOwpwJ 00:29:10.516 02:48:43 -- keyring/file.sh@51 -- # get_key key0 00:29:10.516 02:48:43 -- keyring/file.sh@51 -- # jq -r .path 00:29:10.516 02:48:43 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.516 02:48:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.516 02:48:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:10.516 02:48:44 -- keyring/file.sh@51 -- # [[ /tmp/tmp.PCCy62p9xR == \/\t\m\p\/\t\m\p\.\P\C\C\y\6\2\p\9\x\R ]] 00:29:10.516 02:48:44 -- keyring/file.sh@52 -- # get_key key1 00:29:10.516 02:48:44 -- keyring/file.sh@52 -- # jq -r .path 00:29:10.516 02:48:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.516 02:48:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.516 02:48:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:10.778 02:48:44 -- keyring/file.sh@52 -- # [[ /tmp/tmp.xsMyFOwpwJ == \/\t\m\p\/\t\m\p\.\x\s\M\y\F\O\w\p\w\J ]] 00:29:10.778 02:48:44 -- keyring/file.sh@53 -- # get_refcnt key0 00:29:10.778 02:48:44 -- keyring/common.sh@12 -- # get_key key0 00:29:10.778 02:48:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.778 02:48:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.778 02:48:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:10.778 02:48:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.778 02:48:44 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:10.778 02:48:44 -- keyring/file.sh@54 -- # get_refcnt key1 00:29:10.778 02:48:44 -- keyring/common.sh@12 -- # get_key key1 00:29:10.778 02:48:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.778 02:48:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.778 02:48:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.778 02:48:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:11.040 02:48:44 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:11.040 02:48:44 
-- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:11.040 02:48:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:11.301 [2024-04-27 02:48:44.673083] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:11.301 nvme0n1 00:29:11.301 02:48:44 -- keyring/file.sh@59 -- # get_refcnt key0 00:29:11.301 02:48:44 -- keyring/common.sh@12 -- # get_key key0 00:29:11.301 02:48:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:11.301 02:48:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:11.301 02:48:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:11.301 02:48:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:11.563 02:48:44 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:11.563 02:48:44 -- keyring/file.sh@60 -- # get_refcnt key1 00:29:11.563 02:48:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:11.563 02:48:44 -- keyring/common.sh@12 -- # get_key key1 00:29:11.563 02:48:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:11.563 02:48:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:11.563 02:48:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:11.563 02:48:45 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:11.563 02:48:45 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:11.563 Running I/O for 1 seconds... 
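bdevperf was started with -z, so it idles on its RPC socket until told what to run; the one-second randrw pass whose results follow was driven roughly as sketched here (paths relative to the spdk checkout, key registration and the controller attach over the same socket elided since they appear in the trace above).

  # Start bdevperf in wait-for-RPC mode, then kick off the workload over JSON-RPC
  ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z &
  # ... keyring_file_add_key / bdev_nvme_attach_controller via rpc.py -s /var/tmp/bperf.sock ...
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests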
00:29:12.949 00:29:12.949 Latency(us) 00:29:12.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.949 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:12.949 nvme0n1 : 1.05 3954.02 15.45 0.00 0.00 32195.46 3467.95 146800.64 00:29:12.949 =================================================================================================================== 00:29:12.949 Total : 3954.02 15.45 0.00 0.00 32195.46 3467.95 146800.64 00:29:12.949 0 00:29:12.949 02:48:46 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:12.949 02:48:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:12.949 02:48:46 -- keyring/file.sh@65 -- # get_refcnt key0 00:29:12.949 02:48:46 -- keyring/common.sh@12 -- # get_key key0 00:29:12.949 02:48:46 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.949 02:48:46 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.949 02:48:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.949 02:48:46 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:12.949 02:48:46 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:13.210 02:48:46 -- keyring/file.sh@66 -- # get_refcnt key1 00:29:13.210 02:48:46 -- keyring/common.sh@12 -- # get_key key1 00:29:13.210 02:48:46 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:13.210 02:48:46 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:13.210 02:48:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.210 02:48:46 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:13.210 02:48:46 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:13.210 02:48:46 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:13.210 02:48:46 -- common/autotest_common.sh@638 -- # local es=0 00:29:13.211 02:48:46 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:13.211 02:48:46 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:13.211 02:48:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.211 02:48:46 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:13.211 02:48:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.211 02:48:46 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:13.211 02:48:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:13.471 [2024-04-27 02:48:46.884360] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:13.471 [2024-04-27 02:48:46.884883] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x881090 (107): Transport endpoint is not connected 00:29:13.471 [2024-04-27 02:48:46.885877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x881090 (9): Bad file descriptor 00:29:13.471 [2024-04-27 02:48:46.886877] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:13.471 [2024-04-27 02:48:46.886886] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:13.471 [2024-04-27 02:48:46.886893] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:13.471 request: 00:29:13.471 { 00:29:13.471 "name": "nvme0", 00:29:13.471 "trtype": "tcp", 00:29:13.471 "traddr": "127.0.0.1", 00:29:13.471 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:13.471 "adrfam": "ipv4", 00:29:13.471 "trsvcid": "4420", 00:29:13.471 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:13.471 "psk": "key1", 00:29:13.471 "method": "bdev_nvme_attach_controller", 00:29:13.471 "req_id": 1 00:29:13.471 } 00:29:13.471 Got JSON-RPC error response 00:29:13.471 response: 00:29:13.471 { 00:29:13.471 "code": -32602, 00:29:13.471 "message": "Invalid parameters" 00:29:13.471 } 00:29:13.471 02:48:46 -- common/autotest_common.sh@641 -- # es=1 00:29:13.471 02:48:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:13.471 02:48:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:13.471 02:48:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:13.471 02:48:46 -- keyring/file.sh@71 -- # get_refcnt key0 00:29:13.471 02:48:46 -- keyring/common.sh@12 -- # get_key key0 00:29:13.471 02:48:46 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:13.471 02:48:46 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:13.471 02:48:46 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:13.471 02:48:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.471 02:48:47 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:13.471 02:48:47 -- keyring/file.sh@72 -- # get_refcnt key1 00:29:13.471 02:48:47 -- keyring/common.sh@12 -- # get_key key1 00:29:13.471 02:48:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:13.471 02:48:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:13.471 02:48:47 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.471 02:48:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:13.733 02:48:47 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:13.734 02:48:47 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:13.734 02:48:47 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:13.994 02:48:47 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:13.994 02:48:47 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:13.994 02:48:47 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:13.994 02:48:47 -- keyring/file.sh@77 -- # jq length 00:29:13.995 02:48:47 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.256 02:48:47 -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:14.256 02:48:47 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.PCCy62p9xR 00:29:14.256 02:48:47 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.PCCy62p9xR 00:29:14.256 02:48:47 -- common/autotest_common.sh@638 -- # local es=0 00:29:14.256 02:48:47 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.PCCy62p9xR 00:29:14.256 02:48:47 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:14.256 02:48:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:14.256 02:48:47 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:14.256 02:48:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:14.256 02:48:47 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PCCy62p9xR 00:29:14.256 02:48:47 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PCCy62p9xR 00:29:14.256 [2024-04-27 02:48:47.836421] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PCCy62p9xR': 0100660 00:29:14.256 [2024-04-27 02:48:47.836443] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:14.256 request: 00:29:14.256 { 00:29:14.256 "name": "key0", 00:29:14.256 "path": "/tmp/tmp.PCCy62p9xR", 00:29:14.256 "method": "keyring_file_add_key", 00:29:14.256 "req_id": 1 00:29:14.256 } 00:29:14.256 Got JSON-RPC error response 00:29:14.256 response: 00:29:14.256 { 00:29:14.256 "code": -1, 00:29:14.256 "message": "Operation not permitted" 00:29:14.256 } 00:29:14.256 02:48:47 -- common/autotest_common.sh@641 -- # es=1 00:29:14.256 02:48:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:14.256 02:48:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:14.256 02:48:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:14.256 02:48:47 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.PCCy62p9xR 00:29:14.256 02:48:47 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PCCy62p9xR 00:29:14.256 02:48:47 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PCCy62p9xR 00:29:14.518 02:48:48 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.PCCy62p9xR 00:29:14.518 02:48:48 -- keyring/file.sh@88 -- # get_refcnt key0 00:29:14.518 02:48:48 -- keyring/common.sh@12 -- # get_key key0 00:29:14.518 02:48:48 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:14.518 02:48:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.518 02:48:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.518 02:48:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:14.780 02:48:48 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:14.780 02:48:48 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:14.780 02:48:48 -- common/autotest_common.sh@638 -- # local es=0 00:29:14.780 02:48:48 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:14.780 02:48:48 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:14.780 02:48:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:14.780 02:48:48 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:14.780 02:48:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:14.780 02:48:48 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:14.780 02:48:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:14.780 [2024-04-27 02:48:48.313651] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.PCCy62p9xR': No such file or directory 00:29:14.780 [2024-04-27 02:48:48.313670] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:14.780 [2024-04-27 02:48:48.313692] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:14.780 [2024-04-27 02:48:48.313699] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:14.780 [2024-04-27 02:48:48.313710] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:14.780 request: 00:29:14.780 { 00:29:14.780 "name": "nvme0", 00:29:14.780 "trtype": "tcp", 00:29:14.780 "traddr": "127.0.0.1", 00:29:14.780 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:14.780 "adrfam": "ipv4", 00:29:14.780 "trsvcid": "4420", 00:29:14.780 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.780 "psk": "key0", 00:29:14.780 "method": "bdev_nvme_attach_controller", 00:29:14.780 "req_id": 1 00:29:14.780 } 00:29:14.780 Got JSON-RPC error response 00:29:14.780 response: 00:29:14.780 { 00:29:14.780 "code": -19, 00:29:14.780 "message": "No such device" 00:29:14.780 } 00:29:14.780 02:48:48 -- common/autotest_common.sh@641 -- # es=1 00:29:14.780 02:48:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:14.780 02:48:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:14.780 02:48:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:14.780 02:48:48 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:14.780 02:48:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:15.041 02:48:48 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:15.041 02:48:48 -- keyring/common.sh@15 -- # local name key digest path 00:29:15.041 02:48:48 -- keyring/common.sh@17 -- # name=key0 00:29:15.041 02:48:48 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:15.041 02:48:48 -- keyring/common.sh@17 -- # digest=0 00:29:15.041 02:48:48 -- keyring/common.sh@18 -- # mktemp 00:29:15.041 02:48:48 -- keyring/common.sh@18 -- # path=/tmp/tmp.igOCqKMcPd 00:29:15.041 02:48:48 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:15.041 02:48:48 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:15.041 02:48:48 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:15.041 02:48:48 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:15.041 02:48:48 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:15.041 02:48:48 -- nvmf/common.sh@693 -- # digest=0 00:29:15.041 02:48:48 -- nvmf/common.sh@694 -- # python - 00:29:15.041 02:48:48 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.igOCqKMcPd 00:29:15.041 02:48:48 -- keyring/common.sh@23 -- # echo /tmp/tmp.igOCqKMcPd 00:29:15.041 02:48:48 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.igOCqKMcPd 00:29:15.041 02:48:48 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.igOCqKMcPd 00:29:15.041 02:48:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.igOCqKMcPd 00:29:15.302 02:48:48 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:15.302 02:48:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:15.564 nvme0n1 00:29:15.564 02:48:48 -- keyring/file.sh@99 -- # get_refcnt key0 00:29:15.564 02:48:48 -- keyring/common.sh@12 -- # get_key key0 00:29:15.564 02:48:48 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:15.564 02:48:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:15.564 02:48:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:15.564 02:48:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.564 02:48:49 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:15.564 02:48:49 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:15.564 02:48:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:15.825 02:48:49 -- keyring/file.sh@101 -- # get_key key0 00:29:15.825 02:48:49 -- keyring/file.sh@101 -- # jq -r .removed 00:29:15.825 02:48:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:15.825 02:48:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:15.825 02:48:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.825 02:48:49 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:15.825 02:48:49 -- keyring/file.sh@102 -- # get_refcnt key0 00:29:15.825 02:48:49 -- keyring/common.sh@12 -- # get_key key0 00:29:15.825 02:48:49 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:16.087 02:48:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:16.087 02:48:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:16.087 02:48:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.087 02:48:49 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:16.087 02:48:49 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:16.087 02:48:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:16.349 02:48:49 -- keyring/file.sh@104 -- # jq length 00:29:16.349 02:48:49 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:16.349 
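For context, the refcount checks threaded through the trace above all reduce to one pattern: query keyring_get_keys over the bperf RPC socket and filter with jq. A minimal sketch of that helper logic (the socket path, rpc.py location, key names and RPC methods are taken from the log; the shell variable names are only for the sketch):

# Sketch of the get_refcnt/jq pattern used by keyring/common.sh in the trace above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

get_refcnt() {
    local name=$1
    # keyring_get_keys returns a JSON array; pick the named key and read its refcnt
    "$RPC" -s "$BPERF_SOCK" keyring_get_keys | jq -r ".[] | select(.name == \"$name\") | .refcnt"
}

get_refcnt key0                                          # 2 while nvme0 still holds key0
"$RPC" -s "$BPERF_SOCK" keyring_file_remove_key key0     # key stays referenced but is flagged as removed
"$RPC" -s "$BPERF_SOCK" bdev_nvme_detach_controller nvme0
"$RPC" -s "$BPERF_SOCK" keyring_get_keys | jq length     # 0 once nothing references the key any more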
02:48:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.349 02:48:49 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:16.349 02:48:49 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.igOCqKMcPd 00:29:16.349 02:48:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.igOCqKMcPd 00:29:16.610 02:48:50 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xsMyFOwpwJ 00:29:16.610 02:48:50 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xsMyFOwpwJ 00:29:16.872 02:48:50 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:16.872 02:48:50 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:16.872 nvme0n1 00:29:16.872 02:48:50 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:16.872 02:48:50 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:17.133 02:48:50 -- keyring/file.sh@112 -- # config='{ 00:29:17.133 "subsystems": [ 00:29:17.133 { 00:29:17.133 "subsystem": "keyring", 00:29:17.133 "config": [ 00:29:17.133 { 00:29:17.133 "method": "keyring_file_add_key", 00:29:17.133 "params": { 00:29:17.133 "name": "key0", 00:29:17.133 "path": "/tmp/tmp.igOCqKMcPd" 00:29:17.133 } 00:29:17.133 }, 00:29:17.133 { 00:29:17.133 "method": "keyring_file_add_key", 00:29:17.133 "params": { 00:29:17.133 "name": "key1", 00:29:17.133 "path": "/tmp/tmp.xsMyFOwpwJ" 00:29:17.133 } 00:29:17.133 } 00:29:17.133 ] 00:29:17.133 }, 00:29:17.133 { 00:29:17.133 "subsystem": "iobuf", 00:29:17.133 "config": [ 00:29:17.133 { 00:29:17.133 "method": "iobuf_set_options", 00:29:17.133 "params": { 00:29:17.133 "small_pool_count": 8192, 00:29:17.133 "large_pool_count": 1024, 00:29:17.133 "small_bufsize": 8192, 00:29:17.133 "large_bufsize": 135168 00:29:17.133 } 00:29:17.133 } 00:29:17.133 ] 00:29:17.133 }, 00:29:17.133 { 00:29:17.133 "subsystem": "sock", 00:29:17.133 "config": [ 00:29:17.133 { 00:29:17.133 "method": "sock_impl_set_options", 00:29:17.133 "params": { 00:29:17.133 "impl_name": "posix", 00:29:17.133 "recv_buf_size": 2097152, 00:29:17.133 "send_buf_size": 2097152, 00:29:17.133 "enable_recv_pipe": true, 00:29:17.133 "enable_quickack": false, 00:29:17.133 "enable_placement_id": 0, 00:29:17.133 "enable_zerocopy_send_server": true, 00:29:17.134 "enable_zerocopy_send_client": false, 00:29:17.134 "zerocopy_threshold": 0, 00:29:17.134 "tls_version": 0, 00:29:17.134 "enable_ktls": false 00:29:17.134 } 00:29:17.134 }, 00:29:17.134 { 00:29:17.134 "method": "sock_impl_set_options", 00:29:17.134 "params": { 00:29:17.134 "impl_name": "ssl", 00:29:17.134 "recv_buf_size": 4096, 00:29:17.134 "send_buf_size": 4096, 00:29:17.134 "enable_recv_pipe": true, 00:29:17.134 "enable_quickack": false, 00:29:17.134 "enable_placement_id": 0, 00:29:17.134 "enable_zerocopy_send_server": true, 00:29:17.134 "enable_zerocopy_send_client": false, 00:29:17.134 "zerocopy_threshold": 0, 00:29:17.134 
"tls_version": 0, 00:29:17.134 "enable_ktls": false 00:29:17.134 } 00:29:17.134 } 00:29:17.134 ] 00:29:17.134 }, 00:29:17.134 { 00:29:17.134 "subsystem": "vmd", 00:29:17.134 "config": [] 00:29:17.134 }, 00:29:17.134 { 00:29:17.134 "subsystem": "accel", 00:29:17.134 "config": [ 00:29:17.134 { 00:29:17.134 "method": "accel_set_options", 00:29:17.134 "params": { 00:29:17.134 "small_cache_size": 128, 00:29:17.134 "large_cache_size": 16, 00:29:17.134 "task_count": 2048, 00:29:17.134 "sequence_count": 2048, 00:29:17.134 "buf_count": 2048 00:29:17.134 } 00:29:17.134 } 00:29:17.134 ] 00:29:17.134 }, 00:29:17.134 { 00:29:17.134 "subsystem": "bdev", 00:29:17.134 "config": [ 00:29:17.134 { 00:29:17.134 "method": "bdev_set_options", 00:29:17.134 "params": { 00:29:17.134 "bdev_io_pool_size": 65535, 00:29:17.134 "bdev_io_cache_size": 256, 00:29:17.134 "bdev_auto_examine": true, 00:29:17.134 "iobuf_small_cache_size": 128, 00:29:17.134 "iobuf_large_cache_size": 16 00:29:17.134 } 00:29:17.134 }, 00:29:17.134 { 00:29:17.134 "method": "bdev_raid_set_options", 00:29:17.134 "params": { 00:29:17.134 "process_window_size_kb": 1024 00:29:17.134 } 00:29:17.134 }, 00:29:17.134 { 00:29:17.134 "method": "bdev_iscsi_set_options", 00:29:17.134 "params": { 00:29:17.134 "timeout_sec": 30 00:29:17.134 } 00:29:17.134 }, 00:29:17.134 { 00:29:17.134 "method": "bdev_nvme_set_options", 00:29:17.134 "params": { 00:29:17.134 "action_on_timeout": "none", 00:29:17.134 "timeout_us": 0, 00:29:17.134 "timeout_admin_us": 0, 00:29:17.134 "keep_alive_timeout_ms": 10000, 00:29:17.134 "arbitration_burst": 0, 00:29:17.134 "low_priority_weight": 0, 00:29:17.134 "medium_priority_weight": 0, 00:29:17.134 "high_priority_weight": 0, 00:29:17.134 "nvme_adminq_poll_period_us": 10000, 00:29:17.134 "nvme_ioq_poll_period_us": 0, 00:29:17.134 "io_queue_requests": 512, 00:29:17.134 "delay_cmd_submit": true, 00:29:17.134 "transport_retry_count": 4, 00:29:17.134 "bdev_retry_count": 3, 00:29:17.134 "transport_ack_timeout": 0, 00:29:17.134 "ctrlr_loss_timeout_sec": 0, 00:29:17.134 "reconnect_delay_sec": 0, 00:29:17.134 "fast_io_fail_timeout_sec": 0, 00:29:17.134 "disable_auto_failback": false, 00:29:17.134 "generate_uuids": false, 00:29:17.134 "transport_tos": 0, 00:29:17.134 "nvme_error_stat": false, 00:29:17.134 "rdma_srq_size": 0, 00:29:17.134 "io_path_stat": false, 00:29:17.134 "allow_accel_sequence": false, 00:29:17.134 "rdma_max_cq_size": 0, 00:29:17.134 "rdma_cm_event_timeout_ms": 0, 00:29:17.134 "dhchap_digests": [ 00:29:17.134 "sha256", 00:29:17.134 "sha384", 00:29:17.134 "sha512" 00:29:17.134 ], 00:29:17.134 "dhchap_dhgroups": [ 00:29:17.134 "null", 00:29:17.134 "ffdhe2048", 00:29:17.134 "ffdhe3072", 00:29:17.134 "ffdhe4096", 00:29:17.134 "ffdhe6144", 00:29:17.134 "ffdhe8192" 00:29:17.134 ] 00:29:17.134 } 00:29:17.134 }, 00:29:17.134 { 00:29:17.134 "method": "bdev_nvme_attach_controller", 00:29:17.134 "params": { 00:29:17.134 "name": "nvme0", 00:29:17.134 "trtype": "TCP", 00:29:17.134 "adrfam": "IPv4", 00:29:17.134 "traddr": "127.0.0.1", 00:29:17.134 "trsvcid": "4420", 00:29:17.134 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:17.134 "prchk_reftag": false, 00:29:17.134 "prchk_guard": false, 00:29:17.134 "ctrlr_loss_timeout_sec": 0, 00:29:17.134 "reconnect_delay_sec": 0, 00:29:17.134 "fast_io_fail_timeout_sec": 0, 00:29:17.134 "psk": "key0", 00:29:17.134 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:17.134 "hdgst": false, 00:29:17.134 "ddgst": false 00:29:17.134 } 00:29:17.134 }, 00:29:17.134 { 00:29:17.134 "method": "bdev_nvme_set_hotplug", 
00:29:17.134 "params": { 00:29:17.134 "period_us": 100000, 00:29:17.134 "enable": false 00:29:17.134 } 00:29:17.134 }, 00:29:17.134 { 00:29:17.134 "method": "bdev_wait_for_examine" 00:29:17.134 } 00:29:17.134 ] 00:29:17.134 }, 00:29:17.134 { 00:29:17.134 "subsystem": "nbd", 00:29:17.134 "config": [] 00:29:17.134 } 00:29:17.134 ] 00:29:17.134 }' 00:29:17.134 02:48:50 -- keyring/file.sh@114 -- # killprocess 327573 00:29:17.134 02:48:50 -- common/autotest_common.sh@936 -- # '[' -z 327573 ']' 00:29:17.134 02:48:50 -- common/autotest_common.sh@940 -- # kill -0 327573 00:29:17.134 02:48:50 -- common/autotest_common.sh@941 -- # uname 00:29:17.134 02:48:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:17.134 02:48:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 327573 00:29:17.397 02:48:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:17.397 02:48:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:17.397 02:48:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 327573' 00:29:17.397 killing process with pid 327573 00:29:17.397 02:48:50 -- common/autotest_common.sh@955 -- # kill 327573 00:29:17.397 Received shutdown signal, test time was about 1.000000 seconds 00:29:17.397 00:29:17.397 Latency(us) 00:29:17.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.397 =================================================================================================================== 00:29:17.397 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:17.397 02:48:50 -- common/autotest_common.sh@960 -- # wait 327573 00:29:17.397 02:48:50 -- keyring/file.sh@117 -- # bperfpid=329133 00:29:17.397 02:48:50 -- keyring/file.sh@119 -- # waitforlisten 329133 /var/tmp/bperf.sock 00:29:17.397 02:48:50 -- common/autotest_common.sh@817 -- # '[' -z 329133 ']' 00:29:17.397 02:48:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:17.397 02:48:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:17.397 02:48:50 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:17.397 02:48:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:17.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:17.397 02:48:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:17.397 02:48:50 -- common/autotest_common.sh@10 -- # set +x 00:29:17.397 02:48:50 -- keyring/file.sh@115 -- # echo '{ 00:29:17.397 "subsystems": [ 00:29:17.397 { 00:29:17.397 "subsystem": "keyring", 00:29:17.397 "config": [ 00:29:17.397 { 00:29:17.397 "method": "keyring_file_add_key", 00:29:17.397 "params": { 00:29:17.397 "name": "key0", 00:29:17.397 "path": "/tmp/tmp.igOCqKMcPd" 00:29:17.397 } 00:29:17.397 }, 00:29:17.397 { 00:29:17.397 "method": "keyring_file_add_key", 00:29:17.397 "params": { 00:29:17.398 "name": "key1", 00:29:17.398 "path": "/tmp/tmp.xsMyFOwpwJ" 00:29:17.398 } 00:29:17.398 } 00:29:17.398 ] 00:29:17.398 }, 00:29:17.398 { 00:29:17.398 "subsystem": "iobuf", 00:29:17.398 "config": [ 00:29:17.398 { 00:29:17.398 "method": "iobuf_set_options", 00:29:17.398 "params": { 00:29:17.398 "small_pool_count": 8192, 00:29:17.398 "large_pool_count": 1024, 00:29:17.398 "small_bufsize": 8192, 00:29:17.398 "large_bufsize": 135168 00:29:17.398 } 00:29:17.398 } 00:29:17.398 ] 00:29:17.398 }, 00:29:17.398 { 00:29:17.398 "subsystem": "sock", 00:29:17.398 "config": [ 00:29:17.398 { 00:29:17.398 "method": "sock_impl_set_options", 00:29:17.398 "params": { 00:29:17.398 "impl_name": "posix", 00:29:17.398 "recv_buf_size": 2097152, 00:29:17.398 "send_buf_size": 2097152, 00:29:17.398 "enable_recv_pipe": true, 00:29:17.398 "enable_quickack": false, 00:29:17.398 "enable_placement_id": 0, 00:29:17.398 "enable_zerocopy_send_server": true, 00:29:17.398 "enable_zerocopy_send_client": false, 00:29:17.398 "zerocopy_threshold": 0, 00:29:17.398 "tls_version": 0, 00:29:17.398 "enable_ktls": false 00:29:17.398 } 00:29:17.398 }, 00:29:17.398 { 00:29:17.398 "method": "sock_impl_set_options", 00:29:17.398 "params": { 00:29:17.398 "impl_name": "ssl", 00:29:17.398 "recv_buf_size": 4096, 00:29:17.398 "send_buf_size": 4096, 00:29:17.398 "enable_recv_pipe": true, 00:29:17.398 "enable_quickack": false, 00:29:17.398 "enable_placement_id": 0, 00:29:17.398 "enable_zerocopy_send_server": true, 00:29:17.398 "enable_zerocopy_send_client": false, 00:29:17.398 "zerocopy_threshold": 0, 00:29:17.398 "tls_version": 0, 00:29:17.398 "enable_ktls": false 00:29:17.398 } 00:29:17.398 } 00:29:17.398 ] 00:29:17.398 }, 00:29:17.398 { 00:29:17.398 "subsystem": "vmd", 00:29:17.398 "config": [] 00:29:17.398 }, 00:29:17.398 { 00:29:17.398 "subsystem": "accel", 00:29:17.398 "config": [ 00:29:17.398 { 00:29:17.398 "method": "accel_set_options", 00:29:17.398 "params": { 00:29:17.398 "small_cache_size": 128, 00:29:17.398 "large_cache_size": 16, 00:29:17.398 "task_count": 2048, 00:29:17.398 "sequence_count": 2048, 00:29:17.398 "buf_count": 2048 00:29:17.398 } 00:29:17.398 } 00:29:17.398 ] 00:29:17.398 }, 00:29:17.398 { 00:29:17.398 "subsystem": "bdev", 00:29:17.398 "config": [ 00:29:17.398 { 00:29:17.398 "method": "bdev_set_options", 00:29:17.398 "params": { 00:29:17.398 "bdev_io_pool_size": 65535, 00:29:17.398 "bdev_io_cache_size": 256, 00:29:17.398 "bdev_auto_examine": true, 00:29:17.398 "iobuf_small_cache_size": 128, 00:29:17.398 "iobuf_large_cache_size": 16 00:29:17.398 } 00:29:17.398 }, 00:29:17.398 { 00:29:17.398 "method": "bdev_raid_set_options", 00:29:17.398 "params": { 00:29:17.398 "process_window_size_kb": 1024 00:29:17.398 } 00:29:17.398 }, 00:29:17.398 { 00:29:17.398 "method": "bdev_iscsi_set_options", 00:29:17.398 "params": { 00:29:17.398 "timeout_sec": 30 00:29:17.398 } 00:29:17.398 }, 00:29:17.398 { 00:29:17.398 "method": "bdev_nvme_set_options", 
00:29:17.398 "params": { 00:29:17.398 "action_on_timeout": "none", 00:29:17.398 "timeout_us": 0, 00:29:17.398 "timeout_admin_us": 0, 00:29:17.398 "keep_alive_timeout_ms": 10000, 00:29:17.398 "arbitration_burst": 0, 00:29:17.398 "low_priority_weight": 0, 00:29:17.398 "medium_priority_weight": 0, 00:29:17.398 "high_priority_weight": 0, 00:29:17.398 "nvme_adminq_poll_period_us": 10000, 00:29:17.398 "nvme_ioq_poll_period_us": 0, 00:29:17.398 "io_queue_requests": 512, 00:29:17.398 "delay_cmd_submit": true, 00:29:17.398 "transport_retry_count": 4, 00:29:17.398 "bdev_retry_count": 3, 00:29:17.398 "transport_ack_timeout": 0, 00:29:17.398 "ctrlr_loss_timeout_sec": 0, 00:29:17.398 "reconnect_delay_sec": 0, 00:29:17.398 "fast_io_fail_timeout_sec": 0, 00:29:17.398 "disable_auto_failback": false, 00:29:17.398 "generate_uuids": false, 00:29:17.398 "transport_tos": 0, 00:29:17.398 "nvme_error_stat": false, 00:29:17.398 "rdma_srq_size": 0, 00:29:17.398 "io_path_stat": false, 00:29:17.398 "allow_accel_sequence": false, 00:29:17.398 "rdma_max_cq_size": 0, 00:29:17.398 "rdma_cm_event_timeout_ms": 0, 00:29:17.398 "dhchap_digests": [ 00:29:17.398 "sha256", 00:29:17.398 "sha384", 00:29:17.398 "sha512" 00:29:17.398 ], 00:29:17.398 "dhchap_dhgroups": [ 00:29:17.398 "null", 00:29:17.398 "ffdhe2048", 00:29:17.398 "ffdhe3072", 00:29:17.398 "ffdhe4096", 00:29:17.398 "ffdhe6144", 00:29:17.398 "ffdhe8192" 00:29:17.398 ] 00:29:17.398 } 00:29:17.398 }, 00:29:17.398 { 00:29:17.398 "method": "bdev_nvme_attach_controller", 00:29:17.398 "params": { 00:29:17.398 "name": "nvme0", 00:29:17.398 "trtype": "TCP", 00:29:17.398 "adrfam": "IPv4", 00:29:17.398 "traddr": "127.0.0.1", 00:29:17.398 "trsvcid": "4420", 00:29:17.398 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:17.398 "prchk_reftag": false, 00:29:17.398 "prchk_guard": false, 00:29:17.398 "ctrlr_loss_timeout_sec": 0, 00:29:17.398 "reconnect_delay_sec": 0, 00:29:17.398 "fast_io_fail_timeout_sec": 0, 00:29:17.398 "psk": "key0", 00:29:17.398 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:17.398 "hdgst": false, 00:29:17.398 "ddgst": false 00:29:17.398 } 00:29:17.398 }, 00:29:17.398 { 00:29:17.398 "method": "bdev_nvme_set_hotplug", 00:29:17.398 "params": { 00:29:17.398 "period_us": 100000, 00:29:17.398 "enable": false 00:29:17.398 } 00:29:17.398 }, 00:29:17.398 { 00:29:17.398 "method": "bdev_wait_for_examine" 00:29:17.398 } 00:29:17.398 ] 00:29:17.398 }, 00:29:17.398 { 00:29:17.398 "subsystem": "nbd", 00:29:17.398 "config": [] 00:29:17.398 } 00:29:17.398 ] 00:29:17.398 }' 00:29:17.398 [2024-04-27 02:48:50.936813] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:17.398 [2024-04-27 02:48:50.936867] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329133 ] 00:29:17.398 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.398 [2024-04-27 02:48:50.994066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.660 [2024-04-27 02:48:51.056674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.660 [2024-04-27 02:48:51.195556] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:18.233 02:48:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:18.233 02:48:51 -- common/autotest_common.sh@850 -- # return 0 00:29:18.233 02:48:51 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:18.233 02:48:51 -- keyring/file.sh@120 -- # jq length 00:29:18.233 02:48:51 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:18.494 02:48:51 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:18.494 02:48:51 -- keyring/file.sh@121 -- # get_refcnt key0 00:29:18.494 02:48:51 -- keyring/common.sh@12 -- # get_key key0 00:29:18.494 02:48:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:18.494 02:48:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:18.494 02:48:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:18.494 02:48:51 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:18.494 02:48:52 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:18.494 02:48:52 -- keyring/file.sh@122 -- # get_refcnt key1 00:29:18.494 02:48:52 -- keyring/common.sh@12 -- # get_key key1 00:29:18.494 02:48:52 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:18.494 02:48:52 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:18.494 02:48:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:18.494 02:48:52 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:18.754 02:48:52 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:18.754 02:48:52 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:18.755 02:48:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:18.755 02:48:52 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:18.755 02:48:52 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:18.755 02:48:52 -- keyring/file.sh@1 -- # cleanup 00:29:18.755 02:48:52 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.igOCqKMcPd /tmp/tmp.xsMyFOwpwJ 00:29:18.755 02:48:52 -- keyring/file.sh@20 -- # killprocess 329133 00:29:18.755 02:48:52 -- common/autotest_common.sh@936 -- # '[' -z 329133 ']' 00:29:18.755 02:48:52 -- common/autotest_common.sh@940 -- # kill -0 329133 00:29:18.755 02:48:52 -- common/autotest_common.sh@941 -- # uname 00:29:18.755 02:48:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:18.755 02:48:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 329133 00:29:19.016 02:48:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:19.016 02:48:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:19.016 02:48:52 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 329133' 00:29:19.016 killing process with pid 329133 00:29:19.016 02:48:52 -- common/autotest_common.sh@955 -- # kill 329133 00:29:19.016 Received shutdown signal, test time was about 1.000000 seconds 00:29:19.016 00:29:19.016 Latency(us) 00:29:19.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.016 =================================================================================================================== 00:29:19.016 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:19.016 02:48:52 -- common/autotest_common.sh@960 -- # wait 329133 00:29:19.016 02:48:52 -- keyring/file.sh@21 -- # killprocess 327272 00:29:19.016 02:48:52 -- common/autotest_common.sh@936 -- # '[' -z 327272 ']' 00:29:19.016 02:48:52 -- common/autotest_common.sh@940 -- # kill -0 327272 00:29:19.016 02:48:52 -- common/autotest_common.sh@941 -- # uname 00:29:19.016 02:48:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:19.016 02:48:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 327272 00:29:19.016 02:48:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:19.016 02:48:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:19.016 02:48:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 327272' 00:29:19.016 killing process with pid 327272 00:29:19.016 02:48:52 -- common/autotest_common.sh@955 -- # kill 327272 00:29:19.016 [2024-04-27 02:48:52.597656] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:19.016 02:48:52 -- common/autotest_common.sh@960 -- # wait 327272 00:29:19.277 00:29:19.277 real 0m11.202s 00:29:19.277 user 0m26.403s 00:29:19.277 sys 0m2.477s 00:29:19.277 02:48:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:19.277 02:48:52 -- common/autotest_common.sh@10 -- # set +x 00:29:19.277 ************************************ 00:29:19.277 END TEST keyring_file 00:29:19.277 ************************************ 00:29:19.277 02:48:52 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:29:19.277 02:48:52 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:19.277 02:48:52 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:29:19.277 02:48:52 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:29:19.277 02:48:52 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:19.277 02:48:52 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:29:19.277 02:48:52 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:19.277 02:48:52 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:19.277 02:48:52 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:29:19.277 02:48:52 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:29:19.277 02:48:52 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:19.277 02:48:52 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:19.277 02:48:52 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:19.277 02:48:52 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:19.277 02:48:52 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:29:19.277 02:48:52 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:29:19.277 02:48:52 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:29:19.277 02:48:52 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:29:19.277 02:48:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:19.277 02:48:52 -- common/autotest_common.sh@10 -- # set +x 00:29:19.277 02:48:52 -- spdk/autotest.sh@381 -- # autotest_cleanup 
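The killprocess calls traced above (for the bdevperf instances and the main app) all go through the same helper in autotest_common.sh. Reconstructed from the xtrace output, with the control flow simplified and the variable handling approximate, it looks roughly like:

# Approximate shape of killprocess as reconstructed from the -x trace above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                      # bail out if the process is already gone
    local process_name
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # the real helper special-cases processes running under sudo; simplified here
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # reap it so callers can assert a clean exit
}

killprocess 329133   # second bdevperf instance
killprocess 327272   # main SPDK app for the keyring test (reactor_0 in the trace)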
00:29:19.277 02:48:52 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:29:19.277 02:48:52 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:29:19.277 02:48:52 -- common/autotest_common.sh@10 -- # set +x 00:29:27.424 INFO: APP EXITING 00:29:27.424 INFO: killing all VMs 00:29:27.424 INFO: killing vhost app 00:29:27.424 WARN: no vhost pid file found 00:29:27.424 INFO: EXIT DONE 00:29:29.972 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:29:29.972 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:29:29.972 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:29:29.972 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:29:29.972 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:29:30.233 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:29:30.233 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:29:30.233 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:29:30.233 0000:65:00.0 (144d a80a): Already using the nvme driver 00:29:30.233 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:29:30.233 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:29:30.233 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:29:30.233 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:29:30.233 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:29:30.233 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:29:30.233 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:29:30.494 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:29:33.043 Cleaning 00:29:33.043 Removing: /var/run/dpdk/spdk0/config 00:29:33.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:33.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:33.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:33.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:33.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:33.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:33.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:33.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:33.305 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:33.305 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:33.305 Removing: /var/run/dpdk/spdk1/config 00:29:33.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:33.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:33.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:33.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:33.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:33.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:33.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:33.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:33.305 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:33.305 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:33.305 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:33.305 Removing: /var/run/dpdk/spdk2/config 00:29:33.305 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:33.305 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:33.305 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:33.305 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:33.305 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:33.305 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:33.305 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:33.305 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:33.305 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:33.305 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:33.306 Removing: /var/run/dpdk/spdk3/config 00:29:33.306 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:33.306 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:33.306 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:33.306 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:33.306 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:33.306 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:33.306 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:33.306 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:33.306 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:33.306 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:33.306 Removing: /var/run/dpdk/spdk4/config 00:29:33.306 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:33.306 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:33.306 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:33.306 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:33.306 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:33.306 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:33.306 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:33.306 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:33.306 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:33.306 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:33.306 Removing: /dev/shm/bdev_svc_trace.1 00:29:33.306 Removing: /dev/shm/nvmf_trace.0 00:29:33.306 Removing: /dev/shm/spdk_tgt_trace.pid4104567 00:29:33.306 Removing: /var/run/dpdk/spdk0 00:29:33.306 Removing: /var/run/dpdk/spdk1 00:29:33.306 Removing: /var/run/dpdk/spdk2 00:29:33.306 Removing: /var/run/dpdk/spdk3 00:29:33.306 Removing: /var/run/dpdk/spdk4 00:29:33.306 Removing: /var/run/dpdk/spdk_pid101338 00:29:33.306 Removing: /var/run/dpdk/spdk_pid102431 00:29:33.306 Removing: /var/run/dpdk/spdk_pid11634 00:29:33.306 Removing: /var/run/dpdk/spdk_pid123239 00:29:33.306 Removing: /var/run/dpdk/spdk_pid127903 00:29:33.306 Removing: /var/run/dpdk/spdk_pid133282 00:29:33.306 Removing: /var/run/dpdk/spdk_pid135198 00:29:33.306 Removing: /var/run/dpdk/spdk_pid137317 00:29:33.306 Removing: /var/run/dpdk/spdk_pid137652 00:29:33.306 Removing: /var/run/dpdk/spdk_pid137846 00:29:33.306 Removing: /var/run/dpdk/spdk_pid138014 00:29:33.306 Removing: /var/run/dpdk/spdk_pid138724 00:29:33.306 Removing: /var/run/dpdk/spdk_pid140743 00:29:33.306 Removing: /var/run/dpdk/spdk_pid141823 00:29:33.306 Removing: /var/run/dpdk/spdk_pid142325 00:29:33.306 Removing: /var/run/dpdk/spdk_pid144901 00:29:33.306 Removing: /var/run/dpdk/spdk_pid145608 00:29:33.306 Removing: /var/run/dpdk/spdk_pid146490 00:29:33.306 Removing: /var/run/dpdk/spdk_pid151370 00:29:33.306 Removing: /var/run/dpdk/spdk_pid164071 00:29:33.568 Removing: /var/run/dpdk/spdk_pid169000 00:29:33.568 Removing: /var/run/dpdk/spdk_pid176229 00:29:33.568 Removing: /var/run/dpdk/spdk_pid177733 00:29:33.568 Removing: /var/run/dpdk/spdk_pid179584 00:29:33.568 Removing: /var/run/dpdk/spdk_pid18023 00:29:33.568 Removing: /var/run/dpdk/spdk_pid184676 00:29:33.568 Removing: /var/run/dpdk/spdk_pid189633 00:29:33.568 Removing: /var/run/dpdk/spdk_pid198609 00:29:33.568 
Removing: /var/run/dpdk/spdk_pid198711 00:29:33.568 Removing: /var/run/dpdk/spdk_pid203679 00:29:33.568 Removing: /var/run/dpdk/spdk_pid203875 00:29:33.568 Removing: /var/run/dpdk/spdk_pid204206 00:29:33.568 Removing: /var/run/dpdk/spdk_pid204584 00:29:33.568 Removing: /var/run/dpdk/spdk_pid204685 00:29:33.568 Removing: /var/run/dpdk/spdk_pid209943 00:29:33.568 Removing: /var/run/dpdk/spdk_pid210749 00:29:33.568 Removing: /var/run/dpdk/spdk_pid216393 00:29:33.568 Removing: /var/run/dpdk/spdk_pid219838 00:29:33.568 Removing: /var/run/dpdk/spdk_pid226230 00:29:33.568 Removing: /var/run/dpdk/spdk_pid23162 00:29:33.568 Removing: /var/run/dpdk/spdk_pid232467 00:29:33.568 Removing: /var/run/dpdk/spdk_pid23981 00:29:33.568 Removing: /var/run/dpdk/spdk_pid240991 00:29:33.568 Removing: /var/run/dpdk/spdk_pid241023 00:29:33.568 Removing: /var/run/dpdk/spdk_pid262861 00:29:33.568 Removing: /var/run/dpdk/spdk_pid263548 00:29:33.568 Removing: /var/run/dpdk/spdk_pid264222 00:29:33.568 Removing: /var/run/dpdk/spdk_pid264739 00:29:33.568 Removing: /var/run/dpdk/spdk_pid265814 00:29:33.568 Removing: /var/run/dpdk/spdk_pid266621 00:29:33.568 Removing: /var/run/dpdk/spdk_pid267440 00:29:33.568 Removing: /var/run/dpdk/spdk_pid268590 00:29:33.568 Removing: /var/run/dpdk/spdk_pid273635 00:29:33.568 Removing: /var/run/dpdk/spdk_pid273976 00:29:33.568 Removing: /var/run/dpdk/spdk_pid281046 00:29:33.568 Removing: /var/run/dpdk/spdk_pid281390 00:29:33.568 Removing: /var/run/dpdk/spdk_pid284082 00:29:33.568 Removing: /var/run/dpdk/spdk_pid291346 00:29:33.568 Removing: /var/run/dpdk/spdk_pid291351 00:29:33.568 Removing: /var/run/dpdk/spdk_pid297226 00:29:33.568 Removing: /var/run/dpdk/spdk_pid299600 00:29:33.568 Removing: /var/run/dpdk/spdk_pid301959 00:29:33.568 Removing: /var/run/dpdk/spdk_pid303358 00:29:33.568 Removing: /var/run/dpdk/spdk_pid305674 00:29:33.568 Removing: /var/run/dpdk/spdk_pid307200 00:29:33.568 Removing: /var/run/dpdk/spdk_pid316940 00:29:33.568 Removing: /var/run/dpdk/spdk_pid317606 00:29:33.568 Removing: /var/run/dpdk/spdk_pid318580 00:29:33.568 Removing: /var/run/dpdk/spdk_pid321656 00:29:33.568 Removing: /var/run/dpdk/spdk_pid322094 00:29:33.568 Removing: /var/run/dpdk/spdk_pid322693 00:29:33.568 Removing: /var/run/dpdk/spdk_pid327272 00:29:33.568 Removing: /var/run/dpdk/spdk_pid327573 00:29:33.568 Removing: /var/run/dpdk/spdk_pid329133 00:29:33.568 Removing: /var/run/dpdk/spdk_pid37929 00:29:33.568 Removing: /var/run/dpdk/spdk_pid37941 00:29:33.568 Removing: /var/run/dpdk/spdk_pid38944 00:29:33.568 Removing: /var/run/dpdk/spdk_pid40836 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4103061 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4104567 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4105455 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4106567 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4106846 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4108096 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4108255 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4108710 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4109537 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4110305 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4110757 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4111265 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4111635 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4112021 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4112201 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4112489 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4112880 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4114294 00:29:33.568 
Removing: /var/run/dpdk/spdk_pid4117719 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4117952 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4118366 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4118636 00:29:33.568 Removing: /var/run/dpdk/spdk_pid4119020 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4119307 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4119733 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4119886 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4120146 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4120454 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4120646 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4120837 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4121293 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4121654 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4122050 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4122438 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4122466 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4122790 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4123015 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4123277 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4123633 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4123989 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4124350 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4124676 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4124905 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4125130 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4125466 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4125825 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4126182 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4126544 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4126848 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4127077 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4127315 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4127659 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4128023 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4128386 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4128749 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4129079 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4129184 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4129597 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4133806 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4188677 00:29:33.830 Removing: /var/run/dpdk/spdk_pid4193738 00:29:33.830 Removing: /var/run/dpdk/spdk_pid42708 00:29:33.830 Removing: /var/run/dpdk/spdk_pid43379 00:29:33.830 Removing: /var/run/dpdk/spdk_pid43460 00:29:33.830 Removing: /var/run/dpdk/spdk_pid43717 00:29:33.830 Removing: /var/run/dpdk/spdk_pid43940 00:29:33.830 Removing: /var/run/dpdk/spdk_pid44053 00:29:33.830 Removing: /var/run/dpdk/spdk_pid45057 00:29:33.830 Removing: /var/run/dpdk/spdk_pid46061 00:29:33.830 Removing: /var/run/dpdk/spdk_pid47073 00:29:33.830 Removing: /var/run/dpdk/spdk_pid47743 00:29:33.830 Removing: /var/run/dpdk/spdk_pid47745 00:29:33.830 Removing: /var/run/dpdk/spdk_pid48083 00:29:33.830 Removing: /var/run/dpdk/spdk_pid49520 00:29:33.830 Removing: /var/run/dpdk/spdk_pid50872 00:29:33.830 Removing: /var/run/dpdk/spdk_pid61193 00:29:33.830 Removing: /var/run/dpdk/spdk_pid61543 00:29:33.830 Removing: /var/run/dpdk/spdk_pid66610 00:29:33.830 Removing: /var/run/dpdk/spdk_pid73464 00:29:33.830 Removing: /var/run/dpdk/spdk_pid76476 00:29:33.830 Removing: /var/run/dpdk/spdk_pid88728 00:29:33.830 Removing: /var/run/dpdk/spdk_pid99333 00:29:33.830 Clean 00:29:34.092 02:49:07 -- common/autotest_common.sh@1437 -- # return 0 00:29:34.092 02:49:07 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:29:34.092 02:49:07 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:29:34.092 02:49:07 -- common/autotest_common.sh@10 -- # set +x 00:29:34.092 02:49:07 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:29:34.092 02:49:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:34.092 02:49:07 -- common/autotest_common.sh@10 -- # set +x 00:29:34.092 02:49:07 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:34.092 02:49:07 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:34.092 02:49:07 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:34.092 02:49:07 -- spdk/autotest.sh@389 -- # hash lcov 00:29:34.092 02:49:07 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:34.092 02:49:07 -- spdk/autotest.sh@391 -- # hostname 00:29:34.092 02:49:07 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:34.353 geninfo: WARNING: invalid characters removed from testname! 00:30:01.018 02:49:31 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:01.018 02:49:33 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:01.961 02:49:35 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:03.877 02:49:37 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:05.793 02:49:38 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:07.178 02:49:40 -- 
spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:08.566 02:49:41 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:08.566 02:49:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.566 02:49:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:08.566 02:49:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.566 02:49:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.566 02:49:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.566 02:49:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.566 02:49:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.566 02:49:42 -- paths/export.sh@5 -- $ export PATH 00:30:08.566 02:49:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.566 02:49:42 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:08.566 02:49:42 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:08.566 02:49:42 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714178982.XXXXXX 00:30:08.566 02:49:42 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714178982.LpgyEp 00:30:08.566 02:49:42 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:08.566 02:49:42 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:30:08.566 02:49:42 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:08.566 02:49:42 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:08.566 02:49:42 -- 
common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:08.566 02:49:42 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:08.566 02:49:42 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:30:08.566 02:49:42 -- common/autotest_common.sh@10 -- $ set +x 00:30:08.566 02:49:42 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:08.566 02:49:42 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:30:08.566 02:49:42 -- pm/common@17 -- $ local monitor 00:30:08.566 02:49:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.566 02:49:42 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=340714 00:30:08.566 02:49:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.566 02:49:42 -- pm/common@21 -- $ date +%s 00:30:08.566 02:49:42 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=340716 00:30:08.566 02:49:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.566 02:49:42 -- pm/common@21 -- $ date +%s 00:30:08.566 02:49:42 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=340719 00:30:08.566 02:49:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.566 02:49:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714178982 00:30:08.566 02:49:42 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=340722 00:30:08.566 02:49:42 -- pm/common@26 -- $ sleep 1 00:30:08.566 02:49:42 -- pm/common@21 -- $ date +%s 00:30:08.566 02:49:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714178982 00:30:08.566 02:49:42 -- pm/common@21 -- $ date +%s 00:30:08.566 02:49:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714178982 00:30:08.566 02:49:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714178982 00:30:08.566 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714178982_collect-cpu-load.pm.log 00:30:08.827 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714178982_collect-bmc-pm.bmc.pm.log 00:30:08.827 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714178982_collect-vmstat.pm.log 00:30:08.827 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714178982_collect-cpu-temp.pm.log 00:30:09.769 02:49:43 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:30:09.769 02:49:43 -- 
spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:30:09.769 02:49:43 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:09.769 02:49:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:09.769 02:49:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:09.769 02:49:43 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:09.769 02:49:43 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:09.769 02:49:43 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:09.769 02:49:43 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:09.769 02:49:43 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:09.769 02:49:43 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:09.769 02:49:43 -- pm/common@30 -- $ signal_monitor_resources TERM 00:30:09.769 02:49:43 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:30:09.769 02:49:43 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.769 02:49:43 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:09.769 02:49:43 -- pm/common@45 -- $ pid=340727 00:30:09.769 02:49:43 -- pm/common@52 -- $ sudo kill -TERM 340727 00:30:09.769 02:49:43 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.770 02:49:43 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:09.770 02:49:43 -- pm/common@45 -- $ pid=340733 00:30:09.770 02:49:43 -- pm/common@52 -- $ sudo kill -TERM 340733 00:30:09.770 02:49:43 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.770 02:49:43 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:09.770 02:49:43 -- pm/common@45 -- $ pid=340734 00:30:09.770 02:49:43 -- pm/common@52 -- $ sudo kill -TERM 340734 00:30:09.770 02:49:43 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.770 02:49:43 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:09.770 02:49:43 -- pm/common@45 -- $ pid=340735 00:30:09.770 02:49:43 -- pm/common@52 -- $ sudo kill -TERM 340735 00:30:09.770 + [[ -n 3984337 ]] 00:30:09.770 + sudo kill 3984337 00:30:09.779 [Pipeline] } 00:30:09.793 [Pipeline] // stage 00:30:09.799 [Pipeline] } 00:30:09.812 [Pipeline] // timeout 00:30:09.816 [Pipeline] } 00:30:09.829 [Pipeline] // catchError 00:30:09.834 [Pipeline] } 00:30:09.849 [Pipeline] // wrap 00:30:09.856 [Pipeline] } 00:30:09.868 [Pipeline] // catchError 00:30:09.877 [Pipeline] stage 00:30:09.879 [Pipeline] { (Epilogue) 00:30:09.891 [Pipeline] catchError 00:30:09.892 [Pipeline] { 00:30:09.904 [Pipeline] echo 00:30:09.905 Cleanup processes 00:30:09.909 [Pipeline] sh 00:30:10.195 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:10.196 340811 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:10.196 341283 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:10.212 [Pipeline] sh 00:30:10.504 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:10.504 ++ awk '{print $1}' 00:30:10.504 ++ grep -v 'sudo pgrep' 00:30:10.504 + sudo kill -9 340811 00:30:10.518 
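For reference, the coverage post-processing traced a few steps earlier (the lcov runs that emitted the geninfo warning about invalid characters) is a fixed filter pipeline. Condensed from the trace, with the long repeated --rc branch/function-coverage flags left out and the SRC/OUT variables introduced only for brevity, it amounts to:

# Condensed from the autotest.sh trace above; repeated --rc lcov/genhtml flags omitted for readability.
SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SRC/../output

lcov -q -c -d "$SRC" -t "$(hostname)" --no-external -o "$OUT/cov_test.info"    # capture counters from the test run
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"        # strip non-SPDK sources
done
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR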
[Pipeline] sh 00:30:10.807 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:20.824 [Pipeline] sh 00:30:21.113 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:21.113 Artifacts sizes are good 00:30:21.131 [Pipeline] archiveArtifacts 00:30:21.140 Archiving artifacts 00:30:21.340 [Pipeline] sh 00:30:21.655 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:21.672 [Pipeline] cleanWs 00:30:21.684 [WS-CLEANUP] Deleting project workspace... 00:30:21.684 [WS-CLEANUP] Deferred wipeout is used... 00:30:21.691 [WS-CLEANUP] done 00:30:21.693 [Pipeline] } 00:30:21.713 [Pipeline] // catchError 00:30:21.726 [Pipeline] sh 00:30:22.013 + logger -p user.info -t JENKINS-CI 00:30:22.024 [Pipeline] } 00:30:22.038 [Pipeline] // stage 00:30:22.043 [Pipeline] } 00:30:22.060 [Pipeline] // node 00:30:22.064 [Pipeline] End of Pipeline 00:30:22.104 Finished: SUCCESS